Jan 26 00:08:41 crc systemd[1]: Starting Kubernetes Kubelet... Jan 26 00:08:42 crc kubenswrapper[5124]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 00:08:42 crc kubenswrapper[5124]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 26 00:08:42 crc kubenswrapper[5124]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 00:08:42 crc kubenswrapper[5124]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 00:08:42 crc kubenswrapper[5124]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 26 00:08:42 crc kubenswrapper[5124]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.200822 5124 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206666 5124 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206695 5124 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206701 5124 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206707 5124 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206712 5124 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206718 5124 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206723 5124 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206728 5124 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206734 5124 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206739 5124 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206746 5124 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206752 5124 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206757 5124 feature_gate.go:328] unrecognized feature gate: 
VSphereMultiNetworks Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206762 5124 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206767 5124 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206773 5124 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206777 5124 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206782 5124 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206787 5124 feature_gate.go:328] unrecognized feature gate: Example2 Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206792 5124 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206797 5124 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206802 5124 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206807 5124 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206812 5124 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206817 5124 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206822 5124 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206827 5124 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206832 5124 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206836 5124 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206841 5124 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206846 5124 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206851 5124 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206856 5124 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206863 5124 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
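
The deprecation notices at the top of the log all point at the same fix: move those flag values into the KubeletConfiguration file passed via --config (the flags.go:64 dump further down shows it as /etc/kubernetes/kubelet.conf). As a rough sketch only, using the upstream kubelet.config.k8s.io/v1beta1 field names and the values echoed later in this same dump rather than the node's actual config file, the equivalent stanzas would look roughly like this:

# Sketch, not the real /etc/kubernetes/kubelet.conf: the deprecated flags from
# the warnings above, restated as KubeletConfiguration fields. Values are
# copied from the flags.go:64 lines later in this log.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: /var/run/crio/crio.sock            # --container-runtime-endpoint
volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec # --volume-plugin-dir
registerWithTaints:                                          # --register-with-taints
- key: node-role.kubernetes.io/master
  effect: NoSchedule
systemReserved:                                              # --system-reserved
  cpu: 200m
  memory: 350Mi
  ephemeral-storage: 350Mi
evictionHard:                                                # suggested replacement for
  memory.available: 100Mi                                    # --minimum-container-ttl-duration (threshold illustrative)
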
Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206869 5124 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206875 5124 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206880 5124 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206885 5124 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206890 5124 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206894 5124 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206899 5124 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206905 5124 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206910 5124 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206915 5124 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206919 5124 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206926 5124 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206931 5124 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206935 5124 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206940 5124 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206945 5124 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206950 5124 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206955 5124 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206960 5124 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206964 5124 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206969 5124 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206974 5124 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206979 5124 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206985 5124 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206990 5124 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206994 5124 feature_gate.go:328] 
unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.206999 5124 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207004 5124 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207009 5124 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207014 5124 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207019 5124 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207023 5124 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207028 5124 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207033 5124 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207038 5124 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207042 5124 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207047 5124 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207052 5124 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207056 5124 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207062 5124 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207066 5124 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207072 5124 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207078 5124 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207083 5124 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207089 5124 feature_gate.go:328] unrecognized feature gate: Example Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207095 5124 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207100 5124 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207105 5124 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207110 5124 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207115 5124 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207119 5124 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 26 00:08:42 crc 
kubenswrapper[5124]: W0126 00:08:42.207126 5124 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207654 5124 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207664 5124 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207669 5124 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207679 5124 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207684 5124 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207689 5124 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207693 5124 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207699 5124 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207704 5124 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207708 5124 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207713 5124 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207718 5124 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207722 5124 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207727 5124 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207732 5124 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207737 5124 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207741 5124 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207746 5124 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207751 5124 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207757 5124 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207762 5124 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207767 5124 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207772 5124 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207776 5124 feature_gate.go:328] unrecognized feature gate: 
InsightsOnDemandDataGather Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207782 5124 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207787 5124 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207791 5124 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207796 5124 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207801 5124 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207806 5124 feature_gate.go:328] unrecognized feature gate: Example2 Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207811 5124 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207816 5124 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207820 5124 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207826 5124 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207831 5124 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207838 5124 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207843 5124 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207848 5124 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207853 5124 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207858 5124 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207863 5124 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207868 5124 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207873 5124 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207878 5124 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207883 5124 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207888 5124 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207893 5124 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207898 5124 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207903 5124 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207908 5124 feature_gate.go:328] unrecognized 
feature gate: SigstoreImageVerification Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207914 5124 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207922 5124 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207927 5124 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207932 5124 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207939 5124 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207945 5124 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207951 5124 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207957 5124 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207962 5124 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207967 5124 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207972 5124 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207976 5124 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207981 5124 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207986 5124 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207991 5124 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.207997 5124 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.208003 5124 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.208012 5124 feature_gate.go:328] unrecognized feature gate: Example Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.208018 5124 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.208024 5124 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.208030 5124 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.208036 5124 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.208042 5124 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.208050 5124 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.208057 5124 feature_gate.go:328] unrecognized feature 
gate: MultiArchInstallAzure Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.208064 5124 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.208070 5124 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.208075 5124 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.208080 5124 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.208113 5124 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.208119 5124 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.208124 5124 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.208129 5124 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.208135 5124 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.208140 5124 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.208144 5124 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208253 5124 flags.go:64] FLAG: --address="0.0.0.0" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208264 5124 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208274 5124 flags.go:64] FLAG: --anonymous-auth="true" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208287 5124 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208296 5124 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208302 5124 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208308 5124 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208316 5124 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208321 5124 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208327 5124 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208333 5124 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208339 5124 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208345 5124 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208354 5124 flags.go:64] FLAG: --cgroup-root="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208359 5124 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208364 5124 flags.go:64] FLAG: --client-ca-file="" Jan 26 
00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208371 5124 flags.go:64] FLAG: --cloud-config="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208376 5124 flags.go:64] FLAG: --cloud-provider="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208382 5124 flags.go:64] FLAG: --cluster-dns="[]" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208389 5124 flags.go:64] FLAG: --cluster-domain="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208394 5124 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208400 5124 flags.go:64] FLAG: --config-dir="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208405 5124 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208411 5124 flags.go:64] FLAG: --container-log-max-files="5" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208417 5124 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208423 5124 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208429 5124 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208434 5124 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208440 5124 flags.go:64] FLAG: --contention-profiling="false" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208446 5124 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208451 5124 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208457 5124 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208462 5124 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208470 5124 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208475 5124 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208480 5124 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208486 5124 flags.go:64] FLAG: --enable-load-reader="false" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208492 5124 flags.go:64] FLAG: --enable-server="true" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208498 5124 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208505 5124 flags.go:64] FLAG: --event-burst="100" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208511 5124 flags.go:64] FLAG: --event-qps="50" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208517 5124 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208523 5124 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208530 5124 flags.go:64] FLAG: --eviction-hard="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208537 5124 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208545 5124 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 26 00:08:42 crc 
kubenswrapper[5124]: I0126 00:08:42.208551 5124 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208557 5124 flags.go:64] FLAG: --eviction-soft="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208562 5124 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208568 5124 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208574 5124 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208579 5124 flags.go:64] FLAG: --experimental-mounter-path="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208607 5124 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208614 5124 flags.go:64] FLAG: --fail-swap-on="true" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208619 5124 flags.go:64] FLAG: --feature-gates="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208626 5124 flags.go:64] FLAG: --file-check-frequency="20s" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208632 5124 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208638 5124 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208644 5124 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208649 5124 flags.go:64] FLAG: --healthz-port="10248" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208655 5124 flags.go:64] FLAG: --help="false" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208660 5124 flags.go:64] FLAG: --hostname-override="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208666 5124 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208671 5124 flags.go:64] FLAG: --http-check-frequency="20s" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208677 5124 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208682 5124 flags.go:64] FLAG: --image-credential-provider-config="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208687 5124 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208693 5124 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208699 5124 flags.go:64] FLAG: --image-service-endpoint="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208704 5124 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208710 5124 flags.go:64] FLAG: --kube-api-burst="100" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208716 5124 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208721 5124 flags.go:64] FLAG: --kube-api-qps="50" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208727 5124 flags.go:64] FLAG: --kube-reserved="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208732 5124 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208737 5124 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 26 00:08:42 crc 
kubenswrapper[5124]: I0126 00:08:42.208743 5124 flags.go:64] FLAG: --kubelet-cgroups="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208751 5124 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208757 5124 flags.go:64] FLAG: --lock-file="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208762 5124 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208768 5124 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208773 5124 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208781 5124 flags.go:64] FLAG: --log-json-split-stream="false" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208787 5124 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208792 5124 flags.go:64] FLAG: --log-text-split-stream="false" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208797 5124 flags.go:64] FLAG: --logging-format="text" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208802 5124 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208808 5124 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208814 5124 flags.go:64] FLAG: --manifest-url="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208820 5124 flags.go:64] FLAG: --manifest-url-header="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208827 5124 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208832 5124 flags.go:64] FLAG: --max-open-files="1000000" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208840 5124 flags.go:64] FLAG: --max-pods="110" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208848 5124 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208853 5124 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208859 5124 flags.go:64] FLAG: --memory-manager-policy="None" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208864 5124 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208869 5124 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208875 5124 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208881 5124 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208894 5124 flags.go:64] FLAG: --node-status-max-images="50" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208899 5124 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208904 5124 flags.go:64] FLAG: --oom-score-adj="-999" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208911 5124 flags.go:64] FLAG: --pod-cidr="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208916 5124 flags.go:64] FLAG: 
--pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208924 5124 flags.go:64] FLAG: --pod-manifest-path="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208929 5124 flags.go:64] FLAG: --pod-max-pids="-1" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208935 5124 flags.go:64] FLAG: --pods-per-core="0" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208940 5124 flags.go:64] FLAG: --port="10250" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208949 5124 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208955 5124 flags.go:64] FLAG: --provider-id="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208960 5124 flags.go:64] FLAG: --qos-reserved="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208965 5124 flags.go:64] FLAG: --read-only-port="10255" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208970 5124 flags.go:64] FLAG: --register-node="true" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208976 5124 flags.go:64] FLAG: --register-schedulable="true" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208981 5124 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208990 5124 flags.go:64] FLAG: --registry-burst="10" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.208995 5124 flags.go:64] FLAG: --registry-qps="5" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.209001 5124 flags.go:64] FLAG: --reserved-cpus="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.209006 5124 flags.go:64] FLAG: --reserved-memory="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.209012 5124 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.209018 5124 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.209023 5124 flags.go:64] FLAG: --rotate-certificates="false" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.209028 5124 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.209033 5124 flags.go:64] FLAG: --runonce="false" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.209039 5124 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.209047 5124 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.209053 5124 flags.go:64] FLAG: --seccomp-default="false" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.209059 5124 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.209064 5124 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.209071 5124 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.209076 5124 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.209082 5124 flags.go:64] FLAG: --storage-driver-password="root" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.209088 5124 flags.go:64] FLAG: --storage-driver-secure="false" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 
00:08:42.209093 5124 flags.go:64] FLAG: --storage-driver-table="stats" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.209098 5124 flags.go:64] FLAG: --storage-driver-user="root" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.209104 5124 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.209110 5124 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.209116 5124 flags.go:64] FLAG: --system-cgroups="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.209121 5124 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.209130 5124 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.209137 5124 flags.go:64] FLAG: --tls-cert-file="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.209142 5124 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.209156 5124 flags.go:64] FLAG: --tls-min-version="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.209163 5124 flags.go:64] FLAG: --tls-private-key-file="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.209170 5124 flags.go:64] FLAG: --topology-manager-policy="none" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.209177 5124 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.209184 5124 flags.go:64] FLAG: --topology-manager-scope="container" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.209191 5124 flags.go:64] FLAG: --v="2" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.209200 5124 flags.go:64] FLAG: --version="false" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.209209 5124 flags.go:64] FLAG: --vmodule="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.209223 5124 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.209230 5124 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209635 5124 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209647 5124 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209652 5124 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209657 5124 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209663 5124 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209671 5124 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209676 5124 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209681 5124 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209686 5124 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209691 5124 feature_gate.go:328] 
unrecognized feature gate: NoRegistryClusterOperations Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209697 5124 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209703 5124 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209708 5124 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209713 5124 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209718 5124 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209723 5124 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209729 5124 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209734 5124 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209738 5124 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209744 5124 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209750 5124 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209754 5124 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209759 5124 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209764 5124 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209768 5124 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209773 5124 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209778 5124 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209782 5124 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209788 5124 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209792 5124 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209797 5124 feature_gate.go:328] unrecognized feature gate: Example Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209802 5124 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209806 5124 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209811 5124 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209816 5124 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209821 5124 
feature_gate.go:328] unrecognized feature gate: Example2 Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209826 5124 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209834 5124 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209838 5124 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209843 5124 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209850 5124 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209855 5124 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209860 5124 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209865 5124 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209870 5124 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209875 5124 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209880 5124 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209885 5124 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209890 5124 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209906 5124 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209910 5124 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209916 5124 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209920 5124 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209925 5124 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209930 5124 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209935 5124 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209940 5124 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209945 5124 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209950 5124 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209955 5124 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209960 5124 feature_gate.go:328] unrecognized feature gate: 
AWSDedicatedHosts Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209965 5124 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209969 5124 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209974 5124 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209979 5124 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209985 5124 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209992 5124 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.209998 5124 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.210004 5124 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.210013 5124 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.210019 5124 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.210025 5124 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.210031 5124 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.210037 5124 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.210043 5124 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.210049 5124 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.210055 5124 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.210060 5124 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.210065 5124 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.210069 5124 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.210074 5124 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.210079 5124 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.210085 5124 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.210092 5124 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
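
Most of the "unrecognized feature gate" flood in this log is expected on an OpenShift node: the gate list handed to the kubelet evidently mixes cluster-level OpenShift gates (GatewayAPI, NewOLM, ManagedBootImages, and so on) with Kubernetes ones, and the kubelet's parser only recognizes the latter, so each unknown name is warned about at feature_gate.go:328 and left out of the resolved set, which is printed just below at feature_gate.go:384. As an abridged sketch only, not this node's actual /etc/kubernetes/kubelet.conf, the gates the kubelet does recognize would be written in the config file like this:

# Sketch: a few of the kubelet-recognized gates from the feature_gate.go:384
# line below, expressed as a KubeletConfiguration featureGates stanza.
# OpenShift-only gates would still be ignored if listed here.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  KMSv1: true                          # deprecated, hence the feature_gate.go:349 warning
  ServiceAccountTokenNodeBinding: true # already GA, hence the feature_gate.go:351 warning
  ImageVolume: true
  ProcMountType: true
  UserNamespacesSupport: true
  NodeSwap: false
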
Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.210099 5124 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.210104 5124 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.210112 5124 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.218621 5124 server.go:530] "Kubelet version" kubeletVersion="v1.33.5" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.218653 5124 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218732 5124 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218739 5124 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218742 5124 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218746 5124 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218749 5124 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218753 5124 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218756 5124 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218760 5124 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218763 5124 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218766 5124 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218770 5124 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218774 5124 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218777 5124 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218780 5124 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218783 5124 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218786 5124 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218790 5124 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218794 5124 
feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218801 5124 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218804 5124 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218808 5124 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218811 5124 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218814 5124 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218818 5124 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218821 5124 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218825 5124 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218829 5124 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218832 5124 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218835 5124 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218838 5124 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218841 5124 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218844 5124 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218848 5124 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218851 5124 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218854 5124 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218857 5124 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218860 5124 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218863 5124 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218867 5124 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218870 5124 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218874 5124 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218878 5124 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218881 5124 feature_gate.go:328] unrecognized feature gate: 
DyanmicServiceEndpointIBMCloud Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218884 5124 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218888 5124 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218891 5124 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218894 5124 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218897 5124 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218901 5124 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218905 5124 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218910 5124 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218913 5124 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218917 5124 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218920 5124 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218924 5124 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218927 5124 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218931 5124 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218934 5124 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218938 5124 feature_gate.go:328] unrecognized feature gate: Example Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218942 5124 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218945 5124 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218948 5124 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218951 5124 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218955 5124 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218958 5124 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218962 5124 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218965 5124 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218968 5124 feature_gate.go:328] unrecognized feature gate: 
NewOLMOwnSingleNamespace Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218971 5124 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218975 5124 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218979 5124 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218982 5124 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218986 5124 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218989 5124 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218993 5124 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218996 5124 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.218999 5124 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219003 5124 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219006 5124 feature_gate.go:328] unrecognized feature gate: Example2 Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219010 5124 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219013 5124 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219016 5124 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219019 5124 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219023 5124 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219026 5124 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219029 5124 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.219036 5124 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219137 5124 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219142 5124 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219146 5124 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 
00:08:42.219150 5124 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219154 5124 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219158 5124 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219161 5124 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219165 5124 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219168 5124 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219171 5124 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219175 5124 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219178 5124 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219182 5124 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219185 5124 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219189 5124 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219192 5124 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219196 5124 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219200 5124 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219203 5124 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219207 5124 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219210 5124 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219213 5124 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219217 5124 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219220 5124 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219223 5124 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219226 5124 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219230 5124 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219233 5124 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219237 5124 feature_gate.go:328] unrecognized feature gate: 
SetEIPForNLBIngressController Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219240 5124 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219243 5124 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219246 5124 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219249 5124 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219254 5124 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219257 5124 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219260 5124 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219263 5124 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219268 5124 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219271 5124 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219274 5124 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219277 5124 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219281 5124 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219285 5124 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
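The repeated sweep above is the kubelet's embedded Kubernetes feature-gate parser warning about OpenShift-level gate names it does not know; parsing continues, and the following "feature gates:" line reports the map that is actually in effect. As a rough way to see which names are being rejected in an excerpt like this one, here is a minimal, hypothetical helper (not part of the kubelet or of this log) that reads journal text from stdin and tallies the distinct names after "unrecognized feature gate:".

// Hypothetical helper, shown only as a sketch for reading excerpts like the one above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"sort"
)

func main() {
	// Matches fragments like:
	//   ... feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
	re := regexp.MustCompile(`unrecognized feature gate: (\S+)`)

	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			counts[m[1]]++
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "read error:", err)
		os.Exit(1)
	}

	names := make([]string, 0, len(counts))
	for n := range counts {
		names = append(names, n)
	}
	sort.Strings(names)
	for _, n := range names {
		fmt.Printf("%3d  %s\n", counts[n], n)
	}
}

For example, piping this excerpt (or journalctl output for the kubelet unit, assuming it is named kubelet) into the program prints each rejected gate name with how many times it was logged.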
Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219289 5124 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219293 5124 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219297 5124 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219300 5124 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219303 5124 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219307 5124 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219311 5124 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219314 5124 feature_gate.go:328] unrecognized feature gate: Example Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219317 5124 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219321 5124 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219324 5124 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219327 5124 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219330 5124 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219333 5124 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219337 5124 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219340 5124 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219343 5124 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219347 5124 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219350 5124 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219353 5124 feature_gate.go:328] unrecognized feature gate: Example2 Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219357 5124 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219360 5124 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219363 5124 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219366 5124 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219370 5124 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219373 5124 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 
26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219377 5124 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219381 5124 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219384 5124 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219387 5124 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219391 5124 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219394 5124 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219397 5124 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219402 5124 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219406 5124 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219409 5124 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219413 5124 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219417 5124 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219421 5124 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219424 5124 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219427 5124 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219431 5124 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.219434 5124 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.219440 5124 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.219748 5124 server.go:962] "Client rotation is on, will bootstrap in background" Jan 26 00:08:42 crc kubenswrapper[5124]: E0126 00:08:42.221823 5124 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.225950 5124 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and 
set kubeconfig to point to the certificate dir" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.226100 5124 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.227075 5124 server.go:1019] "Starting client certificate rotation" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.227214 5124 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.227261 5124 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.233363 5124 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 26 00:08:42 crc kubenswrapper[5124]: E0126 00:08:42.235051 5124 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.235947 5124 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.245785 5124 log.go:25] "Validated CRI v1 runtime API" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.262058 5124 log.go:25] "Validated CRI v1 image API" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.264078 5124 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.266228 5124 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2026-01-26-00-02-30-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2] Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.266255 5124 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:46 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.282173 5124 manager.go:217] Machine: {Timestamp:2026-01-26 00:08:42.280675416 +0000 UTC m=+0.189594785 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33649926144 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:c7fd9a8b-5491-44c4-bd96-9fa0fdb97ad8 BootID:24413647-b67c-4e2e-bb9e-ac26cf92e744 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824963072 Type:vfs Inodes:4107657 
HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824963072 Type:vfs Inodes:1048576 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:46 Capacity:1073741824 Type:vfs Inodes:4107657 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:20:53:c2 Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:20:53:c2 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:47:77:5a Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:44:84:ca Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:1e:e9:a8 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:b9:57:24 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:e6:75:80:2c:d4:e9 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:12:c4:c6:be:1e:23 Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649926144 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] 
Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.282445 5124 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.282744 5124 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.283663 5124 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.283707 5124 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.283885 5124 topology_manager.go:138] "Creating topology manager with none policy" Jan 26 00:08:42 crc 
kubenswrapper[5124]: I0126 00:08:42.283895 5124 container_manager_linux.go:306] "Creating device plugin manager" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.283915 5124 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.284228 5124 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.284631 5124 state_mem.go:36] "Initialized new in-memory state store" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.284775 5124 server.go:1267] "Using root directory" path="/var/lib/kubelet" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.285274 5124 kubelet.go:491] "Attempting to sync node with API server" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.285355 5124 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.285422 5124 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.285484 5124 kubelet.go:397] "Adding apiserver pod source" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.285559 5124 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 26 00:08:42 crc kubenswrapper[5124]: E0126 00:08:42.287050 5124 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 26 00:08:42 crc kubenswrapper[5124]: E0126 00:08:42.287109 5124 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.288217 5124 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.288300 5124 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.289283 5124 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.289362 5124 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.291199 5124 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.291458 5124 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.292354 5124 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.292829 5124 
plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.292857 5124 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.292868 5124 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.292877 5124 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.292886 5124 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.292896 5124 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.292908 5124 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.292920 5124 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.292942 5124 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.292968 5124 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.292982 5124 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.293309 5124 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.293970 5124 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.293995 5124 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.294888 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.306651 5124 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.306724 5124 server.go:1295] "Started kubelet" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.306949 5124 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.307070 5124 server_v1.go:47] "podresources" method="list" useActivePods=true Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.306984 5124 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.307670 5124 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 26 00:08:42 crc kubenswrapper[5124]: E0126 00:08:42.308147 5124 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.219:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e1f4dba986272 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:42.306675314 +0000 UTC m=+0.215594673,LastTimestamp:2026-01-26 00:08:42.306675314 +0000 UTC m=+0.215594673,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:08:42 crc systemd[1]: Started Kubernetes Kubelet. Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.308611 5124 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.308620 5124 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving" Jan 26 00:08:42 crc kubenswrapper[5124]: E0126 00:08:42.308889 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.308977 5124 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.309002 5124 server.go:317] "Adding debug handlers to kubelet server" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.309201 5124 volume_manager.go:295] "The desired_state_of_world populator starts" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.309228 5124 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 26 00:08:42 crc kubenswrapper[5124]: E0126 00:08:42.310283 5124 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.310887 5124 factory.go:55] Registering systemd factory Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.310932 5124 factory.go:223] Registration of the systemd container factory successfully Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.311206 5124 factory.go:153] Registering CRI-O factory Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.311231 5124 factory.go:223] Registration of the crio container factory successfully Jan 26 00:08:42 crc kubenswrapper[5124]: E0126 00:08:42.311206 5124 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="200ms" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.311300 5124 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.311323 5124 factory.go:103] Registering Raw factory Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.311339 5124 manager.go:1196] Started watching for new ooms in manager Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.311983 5124 manager.go:319] Starting recovery of all containers Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.336462 5124 manager.go:324] Recovery completed Jan 
26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.354639 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.357294 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.357347 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.357360 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.358111 5124 cpu_manager.go:222] "Starting CPU manager" policy="none" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.358123 5124 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.358144 5124 state_mem.go:36] "Initialized new in-memory state store" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.361767 5124 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.363088 5124 policy_none.go:49] "None policy: Start" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.363177 5124 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.363256 5124 state_mem.go:35] "Initializing new in-memory state store" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.364090 5124 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.364149 5124 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.364182 5124 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
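Every "connection refused" error above traces back to the same dial: the kubelet cannot yet reach the API server at api-int.crc.testing:6443 (38.102.83.219), so the certificate signing request, the reflector list/watch calls, the node lease, and event posting all fail and are retried. The following is a minimal, hypothetical probe (not part of the kubelet or of this log) that reproduces that dial and reports whether the endpoint is accepting TCP connections yet; the host and port are taken from the error messages above.

// Hypothetical connectivity probe, shown only as a sketch.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const endpoint = "api-int.crc.testing:6443" // endpoint seen in the "connection refused" errors
	conn, err := net.DialTimeout("tcp", endpoint, 3*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "dial %s: %v\n", endpoint, err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Printf("dial %s: ok (remote %s)\n", endpoint, conn.RemoteAddr())
}

Once the API server starts listening on that address, the same dial succeeds and the kubelet's retries in the log above begin to go through.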
Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.364192 5124 kubelet.go:2451] "Starting kubelet main sync loop" Jan 26 00:08:42 crc kubenswrapper[5124]: E0126 00:08:42.364241 5124 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 26 00:08:42 crc kubenswrapper[5124]: E0126 00:08:42.366499 5124 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.367932 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.367971 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.367986 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.367997 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368008 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368017 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368027 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368036 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368048 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" 
volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368058 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368067 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368077 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368088 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368098 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368109 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368120 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368154 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368164 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368174 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368185 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" 
volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368196 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368205 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368216 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368227 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368239 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368250 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368260 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368271 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368284 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368296 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368306 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" 
volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368322 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368333 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368344 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368354 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368365 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368378 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368388 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368398 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368408 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368419 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368430 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" 
volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368442 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368453 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368463 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368473 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368484 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368494 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368511 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368521 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368532 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368544 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368556 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" 
volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368566 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368577 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368605 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368623 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368634 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368647 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368656 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368666 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368678 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368689 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368699 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" 
volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368709 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368719 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368730 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368741 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368753 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368763 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368775 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368787 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368798 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368811 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368822 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" 
volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368833 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368846 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368856 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368866 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368877 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368898 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368908 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368919 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368930 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368941 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368952 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" 
volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368962 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368973 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368984 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.368994 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369004 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369014 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369026 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369036 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369047 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369058 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369070 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" 
volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369082 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369093 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369104 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369118 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369129 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369139 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369151 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369165 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369177 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369188 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369199 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" 
volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369209 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369219 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369229 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369239 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369284 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369297 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369309 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369319 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369333 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369344 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369354 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" 
volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369365 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369376 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369386 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369397 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369407 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369417 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369427 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369440 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369450 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369461 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369471 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" 
seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369483 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369493 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369505 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369515 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369524 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369534 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369547 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369558 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369569 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369579 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369681 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369693 5124 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369705 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369715 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369727 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369737 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369747 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369757 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369767 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369777 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369787 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369797 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369807 5124 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369818 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369827 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369837 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369847 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369859 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369871 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369881 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369892 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369902 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369913 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369924 5124 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369935 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369944 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369954 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369964 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369976 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369986 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.369996 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.370008 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.370019 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.370029 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.370039 5124 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.370049 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.370064 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.370076 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.370086 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.370097 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.370109 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.370120 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.370131 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.370142 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.370153 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.370164 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.370175 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.370185 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.370198 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.370210 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.370222 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.370234 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.370245 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.370256 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.370267 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.370278 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.370290 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.370301 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.370312 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.370324 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.370336 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.370347 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.370359 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.370371 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.370382 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.370393 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.370404 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.370416 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.370431 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.370442 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.370454 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.370464 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.370476 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.370488 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.374987 5124 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.375775 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.375801 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.375817 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.375829 5124 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.375841 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.375852 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.375864 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.375875 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.375887 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.375898 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.375979 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.375993 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.376005 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.376019 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.376030 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" 
volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.376042 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.376053 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.376066 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.376078 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.376090 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.376101 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.376114 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.376127 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.376140 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.376155 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.376173 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" 
volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.376185 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.376197 5124 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext="" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.376209 5124 reconstruct.go:97] "Volume reconstruction finished" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.376216 5124 reconciler.go:26] "Reconciler: start to sync state" Jan 26 00:08:42 crc kubenswrapper[5124]: E0126 00:08:42.409415 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.415722 5124 manager.go:341] "Starting Device Plugin manager" Jan 26 00:08:42 crc kubenswrapper[5124]: E0126 00:08:42.415811 5124 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.415904 5124 server.go:85] "Starting device plugin registration server" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.416631 5124 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.416651 5124 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.416891 5124 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.416976 5124 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.416990 5124 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 26 00:08:42 crc kubenswrapper[5124]: E0126 00:08:42.420317 5124 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="non-existent label \"crio-containers\"" Jan 26 00:08:42 crc kubenswrapper[5124]: E0126 00:08:42.420355 5124 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.464525 5124 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.464716 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.465921 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.465960 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.465973 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.466533 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.466781 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.467095 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.469280 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.469330 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.469343 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.469614 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.469655 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.469668 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.471166 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.471420 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.471479 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.472162 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.472196 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.472231 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.472696 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.472728 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.472739 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.473442 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.473499 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.473534 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.473991 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.474016 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.474045 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.474081 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.474061 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.474137 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.474881 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.475001 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.475041 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.475559 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.475607 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.475622 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.475627 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.475648 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.475660 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.476713 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.476765 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.477251 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.477336 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.477372 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:42 crc kubenswrapper[5124]: E0126 00:08:42.491689 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:08:42 crc kubenswrapper[5124]: E0126 00:08:42.507478 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:08:42 crc kubenswrapper[5124]: E0126 00:08:42.512087 5124 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="400ms" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.517166 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.517900 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.517938 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.517950 5124 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.517973 5124 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:08:42 crc kubenswrapper[5124]: E0126 00:08:42.518478 5124 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.219:6443: connect: connection refused" node="crc" Jan 26 00:08:42 crc kubenswrapper[5124]: E0126 00:08:42.529799 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:08:42 crc kubenswrapper[5124]: E0126 00:08:42.535977 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:08:42 crc kubenswrapper[5124]: E0126 00:08:42.542311 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.578584 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.578648 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.578673 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.578709 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.578726 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.578740 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.578756 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" 
(UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.578773 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.578840 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.578881 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.578905 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.578921 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.578946 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.578961 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.578978 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.578991 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" 
(UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.579005 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.579018 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.579106 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.579276 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.579302 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.579083 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.579385 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.579387 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.579408 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.579427 5124 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.579447 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.579610 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.579652 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.580089 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.680491 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.680491 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.680578 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.680669 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.680707 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: 
\"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.680726 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.680742 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.680772 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.680781 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.680789 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.680832 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.680853 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.680817 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.680782 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.680744 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod 
\"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.680787 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.680937 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.680957 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.680977 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.680997 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.681025 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.681030 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.681059 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.681069 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.681072 5124 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.681119 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.681109 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.681173 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.681228 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.681247 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.681300 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.681394 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.719218 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.720123 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.720178 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.720192 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.720220 5124 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 
00:08:42 crc kubenswrapper[5124]: E0126 00:08:42.720773 5124 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.219:6443: connect: connection refused" node="crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.792547 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.808019 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.820382 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-94ec0e5693707ab5ee15eafd16d94309a295e24cd5ccac86867f67634c39a98b WatchSource:0}: Error finding container 94ec0e5693707ab5ee15eafd16d94309a295e24cd5ccac86867f67634c39a98b: Status 404 returned error can't find the container with id 94ec0e5693707ab5ee15eafd16d94309a295e24cd5ccac86867f67634c39a98b Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.830321 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.830330 5124 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.831550 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-68e89ad7cf958184da500b484cc496b5337cf2b6d1a0c3c726ef6bcad9d2fa30 WatchSource:0}: Error finding container 68e89ad7cf958184da500b484cc496b5337cf2b6d1a0c3c726ef6bcad9d2fa30: Status 404 returned error can't find the container with id 68e89ad7cf958184da500b484cc496b5337cf2b6d1a0c3c726ef6bcad9d2fa30 Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.836529 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: I0126 00:08:42.843364 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.852693 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0bc7fcb0822a2c13eb2d22cd8c0641.slice/crio-8ec7a1be602355c7a2a55b75981f73c2c6611190e7d0d3371e162b4cea21ee4a WatchSource:0}: Error finding container 8ec7a1be602355c7a2a55b75981f73c2c6611190e7d0d3371e162b4cea21ee4a: Status 404 returned error can't find the container with id 8ec7a1be602355c7a2a55b75981f73c2c6611190e7d0d3371e162b4cea21ee4a Jan 26 00:08:42 crc kubenswrapper[5124]: W0126 00:08:42.853855 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-17d4f7782db76076bcad153af88af8ddcc8747c21fd150dce8cd5fe14315d155 WatchSource:0}: Error finding container 17d4f7782db76076bcad153af88af8ddcc8747c21fd150dce8cd5fe14315d155: Status 404 returned error can't find the container with id 17d4f7782db76076bcad153af88af8ddcc8747c21fd150dce8cd5fe14315d155 Jan 26 00:08:42 crc kubenswrapper[5124]: E0126 00:08:42.912965 5124 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="800ms" Jan 26 00:08:43 crc kubenswrapper[5124]: I0126 00:08:43.121032 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:43 crc kubenswrapper[5124]: I0126 00:08:43.122328 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:43 crc kubenswrapper[5124]: I0126 00:08:43.122378 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:43 crc kubenswrapper[5124]: I0126 00:08:43.122389 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:43 crc kubenswrapper[5124]: I0126 00:08:43.122416 5124 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:08:43 crc kubenswrapper[5124]: E0126 00:08:43.123149 5124 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.219:6443: connect: connection refused" node="crc" Jan 26 00:08:43 crc kubenswrapper[5124]: E0126 00:08:43.168875 5124 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 26 00:08:43 crc kubenswrapper[5124]: E0126 00:08:43.269765 5124 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 26 00:08:43 crc kubenswrapper[5124]: I0126 00:08:43.295914 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused Jan 26 00:08:43 crc kubenswrapper[5124]: I0126 00:08:43.368931 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"bbac19f70c66272a40bc7fe06106f95c04b995c67c127135d678b0ba9a78b1e3"} Jan 26 00:08:43 crc kubenswrapper[5124]: I0126 00:08:43.369043 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"07d6d6ad5a39ee52db84e75946124655db59b06d3d8d2cda453fb038d31f76c0"} Jan 26 00:08:43 crc kubenswrapper[5124]: I0126 00:08:43.369192 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:43 crc kubenswrapper[5124]: I0126 00:08:43.369994 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:43 crc kubenswrapper[5124]: I0126 00:08:43.370041 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:43 crc kubenswrapper[5124]: I0126 00:08:43.370052 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:43 crc kubenswrapper[5124]: E0126 00:08:43.370241 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:08:43 crc kubenswrapper[5124]: I0126 00:08:43.370600 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"6a4d65f95ca5f832e6ac85de46fd3d474221c3263ab1c2eba3123e4742fc5287"} Jan 26 00:08:43 crc kubenswrapper[5124]: I0126 00:08:43.370632 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"8ec7a1be602355c7a2a55b75981f73c2c6611190e7d0d3371e162b4cea21ee4a"} Jan 26 00:08:43 crc kubenswrapper[5124]: I0126 00:08:43.372054 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"2d8ce299ce0a170138601002ffd93680b9c5360205e0cae4cfe90de54c659ccb"} Jan 26 00:08:43 crc kubenswrapper[5124]: I0126 00:08:43.372080 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"17d4f7782db76076bcad153af88af8ddcc8747c21fd150dce8cd5fe14315d155"} Jan 26 00:08:43 crc kubenswrapper[5124]: I0126 00:08:43.372198 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:43 crc kubenswrapper[5124]: I0126 00:08:43.372696 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:43 crc kubenswrapper[5124]: I0126 00:08:43.372735 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:43 crc kubenswrapper[5124]: I0126 00:08:43.372747 5124 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:43 crc kubenswrapper[5124]: E0126 00:08:43.373004 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:08:43 crc kubenswrapper[5124]: I0126 00:08:43.373674 5124 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="66f10dcb1c1f631da6488a7b4271bb9abc58d887ad17e7515550b916cf9a60cc" exitCode=0 Jan 26 00:08:43 crc kubenswrapper[5124]: I0126 00:08:43.373723 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"66f10dcb1c1f631da6488a7b4271bb9abc58d887ad17e7515550b916cf9a60cc"} Jan 26 00:08:43 crc kubenswrapper[5124]: I0126 00:08:43.373738 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"68e89ad7cf958184da500b484cc496b5337cf2b6d1a0c3c726ef6bcad9d2fa30"} Jan 26 00:08:43 crc kubenswrapper[5124]: I0126 00:08:43.373829 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:43 crc kubenswrapper[5124]: I0126 00:08:43.374624 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:43 crc kubenswrapper[5124]: I0126 00:08:43.374665 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:43 crc kubenswrapper[5124]: I0126 00:08:43.374677 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:43 crc kubenswrapper[5124]: E0126 00:08:43.374851 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:08:43 crc kubenswrapper[5124]: I0126 00:08:43.375378 5124 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="0733ced83f8a595542a3a5e1b2358bdd6e9c9867d4d31b83aba01450710a1393" exitCode=0 Jan 26 00:08:43 crc kubenswrapper[5124]: I0126 00:08:43.375442 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"0733ced83f8a595542a3a5e1b2358bdd6e9c9867d4d31b83aba01450710a1393"} Jan 26 00:08:43 crc kubenswrapper[5124]: I0126 00:08:43.375542 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"94ec0e5693707ab5ee15eafd16d94309a295e24cd5ccac86867f67634c39a98b"} Jan 26 00:08:43 crc kubenswrapper[5124]: I0126 00:08:43.375850 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:43 crc kubenswrapper[5124]: I0126 00:08:43.376288 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:43 crc kubenswrapper[5124]: I0126 00:08:43.376846 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:43 crc kubenswrapper[5124]: I0126 00:08:43.376888 5124 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:43 crc kubenswrapper[5124]: I0126 00:08:43.376905 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:43 crc kubenswrapper[5124]: I0126 00:08:43.376651 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:43 crc kubenswrapper[5124]: I0126 00:08:43.377071 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:43 crc kubenswrapper[5124]: I0126 00:08:43.377134 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:43 crc kubenswrapper[5124]: E0126 00:08:43.377346 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:08:43 crc kubenswrapper[5124]: E0126 00:08:43.443846 5124 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 26 00:08:43 crc kubenswrapper[5124]: E0126 00:08:43.548302 5124 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.219:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e1f4dba986272 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:42.306675314 +0000 UTC m=+0.215594673,LastTimestamp:2026-01-26 00:08:42.306675314 +0000 UTC m=+0.215594673,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:08:43 crc kubenswrapper[5124]: E0126 00:08:43.699946 5124 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 26 00:08:43 crc kubenswrapper[5124]: E0126 00:08:43.714698 5124 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="1.6s" Jan 26 00:08:43 crc kubenswrapper[5124]: I0126 00:08:43.923412 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:43 crc kubenswrapper[5124]: I0126 00:08:43.924402 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:43 crc kubenswrapper[5124]: I0126 00:08:43.924464 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:43 crc kubenswrapper[5124]: I0126 
00:08:43.924479 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:43 crc kubenswrapper[5124]: I0126 00:08:43.924506 5124 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:08:43 crc kubenswrapper[5124]: E0126 00:08:43.925817 5124 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.219:6443: connect: connection refused" node="crc" Jan 26 00:08:44 crc kubenswrapper[5124]: I0126 00:08:44.338534 5124 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 26 00:08:44 crc kubenswrapper[5124]: I0126 00:08:44.379670 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"37fbcde7240eabecd5368c44cfa4027f8d40c4f52393eb773692e55130233c75"} Jan 26 00:08:44 crc kubenswrapper[5124]: I0126 00:08:44.379742 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"2f8ecaa38888bb973b4fd3205014aa0edb7c85e52834f767b37256195a18e041"} Jan 26 00:08:44 crc kubenswrapper[5124]: I0126 00:08:44.380983 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"f4382e3a3d54a3ceaf116dd5c6f7f458833943f7e948dc335bc038b3267463d7"} Jan 26 00:08:44 crc kubenswrapper[5124]: I0126 00:08:44.381184 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:44 crc kubenswrapper[5124]: I0126 00:08:44.381768 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:44 crc kubenswrapper[5124]: I0126 00:08:44.381804 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:44 crc kubenswrapper[5124]: I0126 00:08:44.381814 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:44 crc kubenswrapper[5124]: E0126 00:08:44.381992 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:08:44 crc kubenswrapper[5124]: I0126 00:08:44.382336 5124 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="bbac19f70c66272a40bc7fe06106f95c04b995c67c127135d678b0ba9a78b1e3" exitCode=0 Jan 26 00:08:44 crc kubenswrapper[5124]: I0126 00:08:44.382378 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"bbac19f70c66272a40bc7fe06106f95c04b995c67c127135d678b0ba9a78b1e3"} Jan 26 00:08:44 crc kubenswrapper[5124]: I0126 00:08:44.382629 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:44 crc kubenswrapper[5124]: I0126 00:08:44.383225 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:44 crc kubenswrapper[5124]: I0126 00:08:44.383253 5124 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:44 crc kubenswrapper[5124]: I0126 00:08:44.383263 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:44 crc kubenswrapper[5124]: E0126 00:08:44.383395 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:08:44 crc kubenswrapper[5124]: I0126 00:08:44.384698 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"7da17ce8ac77c94210b966d6bc7b376e82189a903321c9800662d2c12abf965d"} Jan 26 00:08:44 crc kubenswrapper[5124]: I0126 00:08:44.384727 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"6a04fa4d6993fe4e83a7bd2d552bb16d9dc8e33e89a789170b8fec180c65b793"} Jan 26 00:08:44 crc kubenswrapper[5124]: I0126 00:08:44.386308 5124 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="2d8ce299ce0a170138601002ffd93680b9c5360205e0cae4cfe90de54c659ccb" exitCode=0 Jan 26 00:08:44 crc kubenswrapper[5124]: I0126 00:08:44.386333 5124 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="6205e86cd3c1859b05bf772087c7bf0fc9286354ae84a1027fbf60ebfbd62df5" exitCode=0 Jan 26 00:08:44 crc kubenswrapper[5124]: I0126 00:08:44.386351 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"2d8ce299ce0a170138601002ffd93680b9c5360205e0cae4cfe90de54c659ccb"} Jan 26 00:08:44 crc kubenswrapper[5124]: I0126 00:08:44.386366 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"6205e86cd3c1859b05bf772087c7bf0fc9286354ae84a1027fbf60ebfbd62df5"} Jan 26 00:08:44 crc kubenswrapper[5124]: I0126 00:08:44.386551 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:44 crc kubenswrapper[5124]: I0126 00:08:44.387039 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:44 crc kubenswrapper[5124]: I0126 00:08:44.387083 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:44 crc kubenswrapper[5124]: I0126 00:08:44.387095 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:44 crc kubenswrapper[5124]: E0126 00:08:44.387324 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:08:45 crc kubenswrapper[5124]: I0126 00:08:45.391201 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"27a7b88896a26f50315b57e5bff7d5ec0511f09f0acb636c09e3c76caf1c686b"} Jan 26 00:08:45 crc kubenswrapper[5124]: I0126 00:08:45.391251 5124 kubelet.go:2569] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"56cb10ea63f74e8cb16b42dc94949b4ddf748e8fdf73c942fb868db9001364e2"} Jan 26 00:08:45 crc kubenswrapper[5124]: I0126 00:08:45.391262 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"b8da7cf7985b3076f734741cd805f8a4f273d7620fc89a9f9d02fa906489960c"} Jan 26 00:08:45 crc kubenswrapper[5124]: I0126 00:08:45.391408 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:45 crc kubenswrapper[5124]: I0126 00:08:45.392544 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:45 crc kubenswrapper[5124]: I0126 00:08:45.392569 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:45 crc kubenswrapper[5124]: I0126 00:08:45.392578 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:45 crc kubenswrapper[5124]: E0126 00:08:45.392824 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:08:45 crc kubenswrapper[5124]: I0126 00:08:45.393940 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"effeb6003c974dc677094f47337b7bf2ba1dad9209e7f72af53b5ac7d069f3aa"} Jan 26 00:08:45 crc kubenswrapper[5124]: I0126 00:08:45.393987 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:45 crc kubenswrapper[5124]: I0126 00:08:45.394649 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:45 crc kubenswrapper[5124]: I0126 00:08:45.394669 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:45 crc kubenswrapper[5124]: I0126 00:08:45.394679 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:45 crc kubenswrapper[5124]: E0126 00:08:45.394826 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:08:45 crc kubenswrapper[5124]: I0126 00:08:45.400181 5124 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="e74e60b8dabfb2b1fb5d7448547929a39ed771ac32c9c8ac05eda98c02da7625" exitCode=0 Jan 26 00:08:45 crc kubenswrapper[5124]: I0126 00:08:45.400448 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:45 crc kubenswrapper[5124]: I0126 00:08:45.400287 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"e74e60b8dabfb2b1fb5d7448547929a39ed771ac32c9c8ac05eda98c02da7625"} Jan 26 00:08:45 crc kubenswrapper[5124]: I0126 00:08:45.401712 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 
00:08:45 crc kubenswrapper[5124]: I0126 00:08:45.401749 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:45 crc kubenswrapper[5124]: I0126 00:08:45.401763 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:45 crc kubenswrapper[5124]: E0126 00:08:45.401952 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:08:45 crc kubenswrapper[5124]: I0126 00:08:45.409680 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"7cc17d906fe2ad8e3b9fa270994f12c6d3e8dda7bd0681854752228ac2ed2021"} Jan 26 00:08:45 crc kubenswrapper[5124]: I0126 00:08:45.409716 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"d8e80c933db284b36b8dafc7bc44abe6be54c57c85857f99b2194d01cced7327"} Jan 26 00:08:45 crc kubenswrapper[5124]: I0126 00:08:45.409726 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"6d8b9a76e6a593a00eb07a766e1124a3590c5c94c41c554bebb577109de5a4ab"} Jan 26 00:08:45 crc kubenswrapper[5124]: I0126 00:08:45.409842 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:45 crc kubenswrapper[5124]: I0126 00:08:45.410285 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:45 crc kubenswrapper[5124]: I0126 00:08:45.410303 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:45 crc kubenswrapper[5124]: I0126 00:08:45.410312 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:45 crc kubenswrapper[5124]: E0126 00:08:45.410480 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:08:45 crc kubenswrapper[5124]: I0126 00:08:45.480379 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:08:45 crc kubenswrapper[5124]: I0126 00:08:45.526061 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:45 crc kubenswrapper[5124]: I0126 00:08:45.526941 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:45 crc kubenswrapper[5124]: I0126 00:08:45.526972 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:45 crc kubenswrapper[5124]: I0126 00:08:45.526985 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:45 crc kubenswrapper[5124]: I0126 00:08:45.527005 5124 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:08:46 crc kubenswrapper[5124]: I0126 00:08:46.415810 5124 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 00:08:46 crc 
kubenswrapper[5124]: I0126 00:08:46.415867 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:46 crc kubenswrapper[5124]: I0126 00:08:46.416106 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"b64f819f442260b8aaac091fe6a09b99175d27d2ec944332d5977a5ca5af58f0"} Jan 26 00:08:46 crc kubenswrapper[5124]: I0126 00:08:46.416137 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"4647208e6c84a5a6977c9b5f4a59a5a2ec2b2957cb47ea0707851ab13bef96ba"} Jan 26 00:08:46 crc kubenswrapper[5124]: I0126 00:08:46.416150 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"94702470d0dd24faac34520e06613c5897b79dde56d2897fabe3a52050980120"} Jan 26 00:08:46 crc kubenswrapper[5124]: I0126 00:08:46.416160 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"cd70d62ee532dd5a0aa8e04beb99f336153670709121aa892e5fa90aca675a40"} Jan 26 00:08:46 crc kubenswrapper[5124]: I0126 00:08:46.416259 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:46 crc kubenswrapper[5124]: I0126 00:08:46.416774 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:46 crc kubenswrapper[5124]: I0126 00:08:46.416804 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:46 crc kubenswrapper[5124]: I0126 00:08:46.416812 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:46 crc kubenswrapper[5124]: E0126 00:08:46.417084 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:08:46 crc kubenswrapper[5124]: I0126 00:08:46.417472 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:46 crc kubenswrapper[5124]: I0126 00:08:46.417492 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:46 crc kubenswrapper[5124]: I0126 00:08:46.417500 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:46 crc kubenswrapper[5124]: E0126 00:08:46.417725 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:08:47 crc kubenswrapper[5124]: I0126 00:08:47.245027 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:08:47 crc kubenswrapper[5124]: I0126 00:08:47.421094 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"2d6d8389d6d15bd747b8ef74dc30f010429f962e34fe75b84935720929eab5ed"} Jan 26 00:08:47 crc kubenswrapper[5124]: I0126 00:08:47.421188 5124 kubelet_node_status.go:413] "Setting node 
annotation to enable volume controller attach/detach" Jan 26 00:08:47 crc kubenswrapper[5124]: I0126 00:08:47.421207 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:47 crc kubenswrapper[5124]: I0126 00:08:47.421316 5124 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 00:08:47 crc kubenswrapper[5124]: I0126 00:08:47.421373 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:47 crc kubenswrapper[5124]: I0126 00:08:47.421998 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:47 crc kubenswrapper[5124]: I0126 00:08:47.422029 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:47 crc kubenswrapper[5124]: I0126 00:08:47.422002 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:47 crc kubenswrapper[5124]: I0126 00:08:47.422062 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:47 crc kubenswrapper[5124]: I0126 00:08:47.422073 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:47 crc kubenswrapper[5124]: I0126 00:08:47.422043 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:47 crc kubenswrapper[5124]: E0126 00:08:47.422259 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:08:47 crc kubenswrapper[5124]: I0126 00:08:47.422498 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:47 crc kubenswrapper[5124]: I0126 00:08:47.422541 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:47 crc kubenswrapper[5124]: I0126 00:08:47.422558 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:47 crc kubenswrapper[5124]: E0126 00:08:47.422637 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:08:47 crc kubenswrapper[5124]: E0126 00:08:47.422989 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:08:47 crc kubenswrapper[5124]: I0126 00:08:47.608457 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:08:47 crc kubenswrapper[5124]: I0126 00:08:47.890899 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:08:47 crc kubenswrapper[5124]: I0126 00:08:47.891415 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:47 crc kubenswrapper[5124]: I0126 00:08:47.892439 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:47 crc kubenswrapper[5124]: I0126 00:08:47.892516 5124 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:47 crc kubenswrapper[5124]: I0126 00:08:47.892543 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:47 crc kubenswrapper[5124]: E0126 00:08:47.893260 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:08:48 crc kubenswrapper[5124]: I0126 00:08:48.422989 5124 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 00:08:48 crc kubenswrapper[5124]: I0126 00:08:48.423042 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:48 crc kubenswrapper[5124]: I0126 00:08:48.423006 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:48 crc kubenswrapper[5124]: I0126 00:08:48.423904 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:48 crc kubenswrapper[5124]: I0126 00:08:48.423948 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:48 crc kubenswrapper[5124]: I0126 00:08:48.423965 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:48 crc kubenswrapper[5124]: E0126 00:08:48.424411 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:08:48 crc kubenswrapper[5124]: I0126 00:08:48.424655 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:48 crc kubenswrapper[5124]: I0126 00:08:48.424676 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:48 crc kubenswrapper[5124]: I0126 00:08:48.424687 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:48 crc kubenswrapper[5124]: E0126 00:08:48.425049 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:08:50 crc kubenswrapper[5124]: I0126 00:08:50.089862 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:08:50 crc kubenswrapper[5124]: I0126 00:08:50.090045 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:50 crc kubenswrapper[5124]: I0126 00:08:50.091668 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:50 crc kubenswrapper[5124]: I0126 00:08:50.091712 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:50 crc kubenswrapper[5124]: I0126 00:08:50.091723 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:50 crc kubenswrapper[5124]: E0126 00:08:50.092183 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:08:50 crc kubenswrapper[5124]: I0126 00:08:50.877244 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" 
status="not ready" pod="openshift-etcd/etcd-crc" Jan 26 00:08:50 crc kubenswrapper[5124]: I0126 00:08:50.877561 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:50 crc kubenswrapper[5124]: I0126 00:08:50.878524 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:50 crc kubenswrapper[5124]: I0126 00:08:50.878563 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:50 crc kubenswrapper[5124]: I0126 00:08:50.878576 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:50 crc kubenswrapper[5124]: E0126 00:08:50.878934 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:08:50 crc kubenswrapper[5124]: I0126 00:08:50.988469 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:08:50 crc kubenswrapper[5124]: I0126 00:08:50.988964 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:50 crc kubenswrapper[5124]: I0126 00:08:50.989900 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:50 crc kubenswrapper[5124]: I0126 00:08:50.989931 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:50 crc kubenswrapper[5124]: I0126 00:08:50.989943 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:50 crc kubenswrapper[5124]: E0126 00:08:50.990405 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:08:51 crc kubenswrapper[5124]: I0126 00:08:51.187317 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:08:51 crc kubenswrapper[5124]: I0126 00:08:51.187741 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:51 crc kubenswrapper[5124]: I0126 00:08:51.188842 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:51 crc kubenswrapper[5124]: I0126 00:08:51.188905 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:51 crc kubenswrapper[5124]: I0126 00:08:51.188919 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:51 crc kubenswrapper[5124]: E0126 00:08:51.189323 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:08:51 crc kubenswrapper[5124]: I0126 00:08:51.194494 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:08:51 crc kubenswrapper[5124]: I0126 00:08:51.365962 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:08:51 crc 
kubenswrapper[5124]: I0126 00:08:51.430075 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:51 crc kubenswrapper[5124]: I0126 00:08:51.431141 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:51 crc kubenswrapper[5124]: I0126 00:08:51.431211 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:51 crc kubenswrapper[5124]: I0126 00:08:51.431231 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:51 crc kubenswrapper[5124]: E0126 00:08:51.431863 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:08:52 crc kubenswrapper[5124]: E0126 00:08:52.420690 5124 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 00:08:52 crc kubenswrapper[5124]: I0126 00:08:52.431654 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:52 crc kubenswrapper[5124]: I0126 00:08:52.432264 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:52 crc kubenswrapper[5124]: I0126 00:08:52.432342 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:52 crc kubenswrapper[5124]: I0126 00:08:52.432389 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:52 crc kubenswrapper[5124]: E0126 00:08:52.433047 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:08:53 crc kubenswrapper[5124]: I0126 00:08:53.528742 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 26 00:08:53 crc kubenswrapper[5124]: I0126 00:08:53.528974 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:53 crc kubenswrapper[5124]: I0126 00:08:53.529707 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:53 crc kubenswrapper[5124]: I0126 00:08:53.529749 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:53 crc kubenswrapper[5124]: I0126 00:08:53.529762 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:53 crc kubenswrapper[5124]: E0126 00:08:53.530175 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:08:54 crc kubenswrapper[5124]: I0126 00:08:54.296136 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 26 00:08:54 crc kubenswrapper[5124]: E0126 00:08:54.341011 5124 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post 
\"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 26 00:08:54 crc kubenswrapper[5124]: I0126 00:08:54.366639 5124 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 00:08:54 crc kubenswrapper[5124]: I0126 00:08:54.366714 5124 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 00:08:54 crc kubenswrapper[5124]: I0126 00:08:54.798408 5124 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 26 00:08:54 crc kubenswrapper[5124]: I0126 00:08:54.798467 5124 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 26 00:08:54 crc kubenswrapper[5124]: I0126 00:08:54.804303 5124 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 26 00:08:54 crc kubenswrapper[5124]: I0126 00:08:54.804352 5124 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 26 00:08:55 crc kubenswrapper[5124]: E0126 00:08:55.315674 5124 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Jan 26 00:08:56 crc kubenswrapper[5124]: I0126 00:08:56.419222 5124 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 26 00:08:56 crc kubenswrapper[5124]: I0126 00:08:56.419302 5124 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get 
\"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 26 00:08:57 crc kubenswrapper[5124]: I0126 00:08:57.614562 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:08:57 crc kubenswrapper[5124]: I0126 00:08:57.614773 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:57 crc kubenswrapper[5124]: I0126 00:08:57.615436 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:57 crc kubenswrapper[5124]: I0126 00:08:57.615458 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:57 crc kubenswrapper[5124]: I0126 00:08:57.615469 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:57 crc kubenswrapper[5124]: E0126 00:08:57.615753 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:08:57 crc kubenswrapper[5124]: I0126 00:08:57.615865 5124 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 26 00:08:57 crc kubenswrapper[5124]: I0126 00:08:57.615923 5124 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 26 00:08:57 crc kubenswrapper[5124]: I0126 00:08:57.618777 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:08:58 crc kubenswrapper[5124]: I0126 00:08:58.444245 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:08:58 crc kubenswrapper[5124]: I0126 00:08:58.444617 5124 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 26 00:08:58 crc kubenswrapper[5124]: I0126 00:08:58.444667 5124 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 26 00:08:58 crc kubenswrapper[5124]: I0126 00:08:58.445674 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:58 crc kubenswrapper[5124]: I0126 00:08:58.445705 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:58 crc kubenswrapper[5124]: I0126 00:08:58.445717 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Jan 26 00:08:58 crc kubenswrapper[5124]: E0126 00:08:58.446006 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:08:58 crc kubenswrapper[5124]: I0126 00:08:58.475171 5124 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 26 00:08:58 crc kubenswrapper[5124]: I0126 00:08:58.488695 5124 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 26 00:08:58 crc kubenswrapper[5124]: E0126 00:08:58.521201 5124 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="6.4s" Jan 26 00:08:59 crc kubenswrapper[5124]: I0126 00:08:59.799936 5124 trace.go:236] Trace[1879366386]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 00:08:45.665) (total time: 14134ms): Jan 26 00:08:59 crc kubenswrapper[5124]: Trace[1879366386]: ---"Objects listed" error:nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope 14134ms (00:08:59.799) Jan 26 00:08:59 crc kubenswrapper[5124]: Trace[1879366386]: [14.134735666s] [14.134735666s] END Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 00:08:59.799973 5124 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 26 00:08:59 crc kubenswrapper[5124]: I0126 00:08:59.800035 5124 trace.go:236] Trace[1710456489]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 00:08:46.109) (total time: 13690ms): Jan 26 00:08:59 crc kubenswrapper[5124]: Trace[1710456489]: ---"Objects listed" error:csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope 13690ms (00:08:59.799) Jan 26 00:08:59 crc kubenswrapper[5124]: Trace[1710456489]: [13.690322109s] [13.690322109s] END Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 00:08:59.800021 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f4dba986272 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:42.306675314 +0000 UTC m=+0.215594673,LastTimestamp:2026-01-26 00:08:42.306675314 +0000 UTC m=+0.215594673,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 00:08:59.800103 5124 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group 
\"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 26 00:08:59 crc kubenswrapper[5124]: I0126 00:08:59.800191 5124 trace.go:236] Trace[1559575919]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 00:08:46.360) (total time: 13439ms): Jan 26 00:08:59 crc kubenswrapper[5124]: Trace[1559575919]: ---"Objects listed" error:services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope 13439ms (00:08:59.800) Jan 26 00:08:59 crc kubenswrapper[5124]: Trace[1559575919]: [13.439694801s] [13.439694801s] END Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 00:08:59.800219 5124 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 26 00:08:59 crc kubenswrapper[5124]: I0126 00:08:59.800402 5124 trace.go:236] Trace[1714954878]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 00:08:45.710) (total time: 14089ms): Jan 26 00:08:59 crc kubenswrapper[5124]: Trace[1714954878]: ---"Objects listed" error:runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope 14089ms (00:08:59.800) Jan 26 00:08:59 crc kubenswrapper[5124]: Trace[1714954878]: [14.089572445s] [14.089572445s] END Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 00:08:59.800452 5124 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 00:08:59.804655 5124 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 00:08:59.804654 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f4dbd9d5d33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:42.357333299 +0000 UTC m=+0.266252648,LastTimestamp:2026-01-26 00:08:42.357333299 +0000 UTC m=+0.266252648,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 00:08:59.806313 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f4dbd9dafb4 default 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:42.35735442 +0000 UTC m=+0.266273769,LastTimestamp:2026-01-26 00:08:42.35735442 +0000 UTC m=+0.266273769,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 00:08:59.811540 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f4dbd9ddbe2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:42.35736573 +0000 UTC m=+0.266285079,LastTimestamp:2026-01-26 00:08:42.35736573 +0000 UTC m=+0.266285079,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 00:08:59.816302 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f4dc144da6f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:42.418641519 +0000 UTC m=+0.327560878,LastTimestamp:2026-01-26 00:08:42.418641519 +0000 UTC m=+0.327560878,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 00:08:59.823833 5124 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f4dbd9d5d33\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f4dbd9d5d33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:42.357333299 +0000 UTC m=+0.266252648,LastTimestamp:2026-01-26 00:08:42.465945568 +0000 UTC m=+0.374864917,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 00:08:59.828780 5124 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f4dbd9dafb4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" 
event="&Event{ObjectMeta:{crc.188e1f4dbd9dafb4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:42.35735442 +0000 UTC m=+0.266273769,LastTimestamp:2026-01-26 00:08:42.465966979 +0000 UTC m=+0.374886328,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 00:08:59.835351 5124 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f4dbd9ddbe2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f4dbd9ddbe2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:42.35736573 +0000 UTC m=+0.266285079,LastTimestamp:2026-01-26 00:08:42.465977209 +0000 UTC m=+0.374896558,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 00:08:59.842794 5124 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f4dbd9d5d33\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f4dbd9d5d33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:42.357333299 +0000 UTC m=+0.266252648,LastTimestamp:2026-01-26 00:08:42.469317865 +0000 UTC m=+0.378237214,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 00:08:59.848398 5124 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f4dbd9dafb4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f4dbd9dafb4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:42.35735442 +0000 UTC m=+0.266273769,LastTimestamp:2026-01-26 00:08:42.469337175 +0000 UTC m=+0.378256524,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 00:08:59.853412 5124 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f4dbd9ddbe2\" is 
forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f4dbd9ddbe2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:42.35736573 +0000 UTC m=+0.266285079,LastTimestamp:2026-01-26 00:08:42.469349586 +0000 UTC m=+0.378268945,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 00:08:59.859411 5124 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f4dbd9d5d33\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f4dbd9d5d33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:42.357333299 +0000 UTC m=+0.266252648,LastTimestamp:2026-01-26 00:08:42.469636759 +0000 UTC m=+0.378556108,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 00:08:59.864017 5124 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f4dbd9dafb4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f4dbd9dafb4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:42.35735442 +0000 UTC m=+0.266273769,LastTimestamp:2026-01-26 00:08:42.46966199 +0000 UTC m=+0.378581339,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 00:08:59.868782 5124 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f4dbd9ddbe2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f4dbd9ddbe2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:42.35736573 +0000 UTC m=+0.266285079,LastTimestamp:2026-01-26 00:08:42.469673971 +0000 UTC m=+0.378593320,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 
00:08:59.872564 5124 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f4dbd9d5d33\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f4dbd9d5d33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:42.357333299 +0000 UTC m=+0.266252648,LastTimestamp:2026-01-26 00:08:42.472181918 +0000 UTC m=+0.381101267,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 00:08:59.876488 5124 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f4dbd9dafb4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f4dbd9dafb4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:42.35735442 +0000 UTC m=+0.266273769,LastTimestamp:2026-01-26 00:08:42.47222347 +0000 UTC m=+0.381142819,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 00:08:59.881069 5124 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f4dbd9ddbe2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f4dbd9ddbe2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:42.35736573 +0000 UTC m=+0.266285079,LastTimestamp:2026-01-26 00:08:42.47223622 +0000 UTC m=+0.381155569,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 00:08:59.886873 5124 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f4dbd9d5d33\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f4dbd9d5d33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:42.357333299 +0000 UTC m=+0.266252648,LastTimestamp:2026-01-26 00:08:42.472714192 +0000 UTC m=+0.381633541,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 00:08:59.891003 5124 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f4dbd9dafb4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f4dbd9dafb4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:42.35735442 +0000 UTC m=+0.266273769,LastTimestamp:2026-01-26 00:08:42.472734263 +0000 UTC m=+0.381653602,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 00:08:59.895091 5124 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f4dbd9ddbe2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f4dbd9ddbe2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:42.35736573 +0000 UTC m=+0.266285079,LastTimestamp:2026-01-26 00:08:42.472743414 +0000 UTC m=+0.381662763,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 00:08:59.899801 5124 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f4dbd9d5d33\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f4dbd9d5d33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:42.357333299 +0000 UTC m=+0.266252648,LastTimestamp:2026-01-26 00:08:42.474028754 +0000 UTC m=+0.382948103,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 00:08:59.904834 5124 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f4dbd9d5d33\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f4dbd9d5d33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:42.357333299 +0000 UTC 
m=+0.266252648,LastTimestamp:2026-01-26 00:08:42.474051435 +0000 UTC m=+0.382970794,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 00:08:59.907184 5124 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f4dbd9dafb4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f4dbd9dafb4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:42.35735442 +0000 UTC m=+0.266273769,LastTimestamp:2026-01-26 00:08:42.474074056 +0000 UTC m=+0.382993405,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 00:08:59.909352 5124 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f4dbd9ddbe2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f4dbd9ddbe2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:42.35736573 +0000 UTC m=+0.266285079,LastTimestamp:2026-01-26 00:08:42.474088237 +0000 UTC m=+0.383007586,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 00:08:59.913371 5124 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f4dbd9dafb4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f4dbd9dafb4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:42.35735442 +0000 UTC m=+0.266273769,LastTimestamp:2026-01-26 00:08:42.474109178 +0000 UTC m=+0.383028527,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 00:08:59.920058 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188e1f4dd9d410bd openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:42.830680253 +0000 UTC m=+0.739599602,LastTimestamp:2026-01-26 00:08:42.830680253 +0000 UTC m=+0.739599602,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 00:08:59.924734 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f4dda314024 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:42.836787236 +0000 UTC m=+0.745706585,LastTimestamp:2026-01-26 00:08:42.836787236 +0000 UTC m=+0.745706585,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 00:08:59.930481 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f4ddb56a2a4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:42.8560145 +0000 UTC m=+0.764933839,LastTimestamp:2026-01-26 00:08:42.8560145 +0000 UTC m=+0.764933839,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 00:08:59.938163 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f4ddb58724c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:42.856133196 +0000 UTC m=+0.765052545,LastTimestamp:2026-01-26 00:08:42.856133196 +0000 UTC m=+0.765052545,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 00:08:59.943939 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f4ddbfad28e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:42.86677467 +0000 UTC m=+0.775694019,LastTimestamp:2026-01-26 00:08:42.86677467 +0000 UTC m=+0.775694019,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 00:08:59.949907 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f4df738af90 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:43.323813776 +0000 UTC m=+1.232733125,LastTimestamp:2026-01-26 00:08:43.323813776 +0000 UTC m=+1.232733125,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 00:08:59.955271 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f4df73df71a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:43.32415977 +0000 UTC m=+1.233079119,LastTimestamp:2026-01-26 00:08:43.32415977 +0000 UTC m=+1.233079119,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 00:08:59.959770 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f4df73fadfc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:43.324272124 +0000 UTC m=+1.233191483,LastTimestamp:2026-01-26 00:08:43.324272124 +0000 UTC m=+1.233191483,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 00:08:59.968543 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188e1f4df7434dcd openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:43.324509645 +0000 UTC m=+1.233428994,LastTimestamp:2026-01-26 00:08:43.324509645 +0000 UTC m=+1.233428994,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 00:08:59.973346 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f4df74c92cc openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:43.325117132 +0000 UTC m=+1.234036481,LastTimestamp:2026-01-26 00:08:43.325117132 +0000 UTC m=+1.234036481,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 00:08:59.982326 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f4df7e891c9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:43.335340489 +0000 UTC m=+1.244259838,LastTimestamp:2026-01-26 00:08:43.335340489 +0000 UTC m=+1.244259838,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 00:08:59.988171 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f4df7f9cfc9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:43.336470473 +0000 UTC m=+1.245389822,LastTimestamp:2026-01-26 00:08:43.336470473 +0000 UTC m=+1.245389822,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 00:08:59.993965 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188e1f4df805f8ec openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:43.337267436 +0000 UTC m=+1.246186815,LastTimestamp:2026-01-26 00:08:43.337267436 +0000 UTC m=+1.246186815,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:08:59 crc kubenswrapper[5124]: E0126 00:08:59.999345 5124 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f4df80d2cb6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:43.337739446 +0000 UTC m=+1.246658795,LastTimestamp:2026-01-26 00:08:43.337739446 +0000 UTC m=+1.246658795,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.005288 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f4df826f0fb openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:43.339428091 +0000 UTC m=+1.248347460,LastTimestamp:2026-01-26 00:08:43.339428091 +0000 UTC m=+1.248347460,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.014475 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f4df842274d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:43.341211469 +0000 UTC m=+1.250130818,LastTimestamp:2026-01-26 00:08:43.341211469 +0000 UTC m=+1.250130818,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.019347 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f4dfa38c9d6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:43.37415215 +0000 UTC m=+1.283071499,LastTimestamp:2026-01-26 00:08:43.37415215 +0000 UTC m=+1.283071499,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.024256 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f4dfa55b26e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:43.376046702 +0000 UTC m=+1.284966051,LastTimestamp:2026-01-26 00:08:43.376046702 +0000 UTC m=+1.284966051,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.031529 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188e1f4dfaabe17c openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:43.381694844 +0000 UTC m=+1.290614213,LastTimestamp:2026-01-26 00:08:43.381694844 +0000 UTC m=+1.290614213,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.045566 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f4e0a4932f0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 
00:08:43.643663088 +0000 UTC m=+1.552582437,LastTimestamp:2026-01-26 00:08:43.643663088 +0000 UTC m=+1.552582437,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.051898 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f4e0a4d49ca openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:43.643931082 +0000 UTC m=+1.552850431,LastTimestamp:2026-01-26 00:08:43.643931082 +0000 UTC m=+1.552850431,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.056679 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188e1f4e0a4e5387 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:43.643999111 +0000 UTC m=+1.552918460,LastTimestamp:2026-01-26 00:08:43.643999111 +0000 UTC m=+1.552918460,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.061922 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f4e0a4e9b41 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:43.644017473 +0000 UTC m=+1.552936822,LastTimestamp:2026-01-26 00:08:43.644017473 +0000 UTC m=+1.552936822,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.066550 5124 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f4e0aeadc4e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:43.654257742 +0000 UTC m=+1.563177091,LastTimestamp:2026-01-26 00:08:43.654257742 +0000 UTC m=+1.563177091,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.071952 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f4e0b004c09 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:43.655662601 +0000 UTC m=+1.564581950,LastTimestamp:2026-01-26 00:08:43.655662601 +0000 UTC m=+1.564581950,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.076066 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f4e0b16f9be openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:43.657148862 +0000 UTC m=+1.566068211,LastTimestamp:2026-01-26 00:08:43.657148862 +0000 UTC m=+1.566068211,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.080688 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" 
event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188e1f4e0b1d6469 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:43.657569385 +0000 UTC m=+1.566488734,LastTimestamp:2026-01-26 00:08:43.657569385 +0000 UTC m=+1.566488734,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.084653 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f4e0b216f8b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:43.657834379 +0000 UTC m=+1.566753728,LastTimestamp:2026-01-26 00:08:43.657834379 +0000 UTC m=+1.566753728,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.088456 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f4e0b2c91d4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:43.658564052 +0000 UTC m=+1.567483401,LastTimestamp:2026-01-26 00:08:43.658564052 +0000 UTC m=+1.567483401,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.093049 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f4e1f530a6f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:43.996629615 +0000 UTC m=+1.905548964,LastTimestamp:2026-01-26 00:08:43.996629615 +0000 UTC m=+1.905548964,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.096859 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f4e1f9a3029 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:44.001292329 +0000 UTC m=+1.910211678,LastTimestamp:2026-01-26 00:08:44.001292329 +0000 UTC m=+1.910211678,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.100874 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f4e200d3865 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:44.008831077 +0000 UTC m=+1.917750426,LastTimestamp:2026-01-26 00:08:44.008831077 +0000 UTC m=+1.917750426,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.105534 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f4e201e4551 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:44.009948497 +0000 UTC m=+1.918867846,LastTimestamp:2026-01-26 00:08:44.009948497 +0000 UTC m=+1.918867846,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.109234 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f4e20391b2f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:44.011707183 +0000 UTC m=+1.920626522,LastTimestamp:2026-01-26 00:08:44.011707183 +0000 UTC m=+1.920626522,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.114283 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f4e20509bde openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:44.013247454 +0000 UTC m=+1.922166803,LastTimestamp:2026-01-26 00:08:44.013247454 +0000 UTC m=+1.922166803,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.118922 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f4e366d286e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present 
on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:44.384217198 +0000 UTC m=+2.293136547,LastTimestamp:2026-01-26 00:08:44.384217198 +0000 UTC m=+2.293136547,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.121164 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f4e36a9d575 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:44.388193653 +0000 UTC m=+2.297113002,LastTimestamp:2026-01-26 00:08:44.388193653 +0000 UTC m=+2.297113002,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.124465 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f4e37edc2bc openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:44.409422524 +0000 UTC m=+2.318341873,LastTimestamp:2026-01-26 00:08:44.409422524 +0000 UTC m=+2.318341873,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.128466 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f4e37f9f746 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:44.410222406 +0000 UTC m=+2.319141755,LastTimestamp:2026-01-26 00:08:44.410222406 +0000 UTC m=+2.319141755,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.132744 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f4e38bcbd69 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:44.422987113 +0000 UTC m=+2.331906462,LastTimestamp:2026-01-26 00:08:44.422987113 +0000 UTC m=+2.331906462,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.136631 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f4e3963cf72 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:44.433936242 +0000 UTC m=+2.342855591,LastTimestamp:2026-01-26 00:08:44.433936242 +0000 UTC m=+2.342855591,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.140800 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f4e399e97e3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:44.437788643 +0000 UTC m=+2.346707992,LastTimestamp:2026-01-26 00:08:44.437788643 +0000 UTC m=+2.346707992,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.145121 5124 event.go:359] "Server rejected 
event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f4e456a2cdc openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:44.635679964 +0000 UTC m=+2.544599313,LastTimestamp:2026-01-26 00:08:44.635679964 +0000 UTC m=+2.544599313,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.149176 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f4e456ff20b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:44.636058123 +0000 UTC m=+2.544977472,LastTimestamp:2026-01-26 00:08:44.636058123 +0000 UTC m=+2.544977472,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.153610 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f4e4609a81c openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:44.64613174 +0000 UTC m=+2.555051089,LastTimestamp:2026-01-26 00:08:44.64613174 +0000 UTC m=+2.555051089,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.157677 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f4e46104a83 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:44.646566531 +0000 UTC m=+2.555485880,LastTimestamp:2026-01-26 00:08:44.646566531 +0000 UTC m=+2.555485880,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.161716 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f4e461747bd openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:44.647024573 +0000 UTC m=+2.555943912,LastTimestamp:2026-01-26 00:08:44.647024573 +0000 UTC m=+2.555943912,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.166149 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f4e4620188e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:44.647602318 +0000 UTC m=+2.556521667,LastTimestamp:2026-01-26 00:08:44.647602318 +0000 UTC m=+2.556521667,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.172253 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f4e47122ed0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:44.663467728 +0000 UTC m=+2.572387077,LastTimestamp:2026-01-26 00:08:44.663467728 +0000 UTC m=+2.572387077,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.177676 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f4e487b16d3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:44.687120083 +0000 UTC m=+2.596039432,LastTimestamp:2026-01-26 00:08:44.687120083 +0000 UTC m=+2.596039432,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.183299 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f4e51800314 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:44.838437652 +0000 UTC m=+2.747357001,LastTimestamp:2026-01-26 00:08:44.838437652 +0000 UTC m=+2.747357001,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.188443 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f4e518db42c openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:44.839334956 +0000 UTC m=+2.748254305,LastTimestamp:2026-01-26 00:08:44.839334956 +0000 UTC 
m=+2.748254305,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.194568 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f4e51f29d32 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:44.84594821 +0000 UTC m=+2.754867559,LastTimestamp:2026-01-26 00:08:44.84594821 +0000 UTC m=+2.754867559,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.199850 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f4e5257b70e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:44.852573966 +0000 UTC m=+2.761493315,LastTimestamp:2026-01-26 00:08:44.852573966 +0000 UTC m=+2.761493315,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.205367 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f4e526a622a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:44.853797418 +0000 UTC m=+2.762716767,LastTimestamp:2026-01-26 00:08:44.853797418 +0000 UTC m=+2.762716767,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.209670 5124 event.go:359] "Server rejected event 
(will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f4e5bb9dc30 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:45.010000944 +0000 UTC m=+2.918920293,LastTimestamp:2026-01-26 00:08:45.010000944 +0000 UTC m=+2.918920293,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.214977 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f4e5c31b8ad openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:45.017856173 +0000 UTC m=+2.926775522,LastTimestamp:2026-01-26 00:08:45.017856173 +0000 UTC m=+2.926775522,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.220353 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f4e736037bc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:45.406779324 +0000 UTC m=+3.315698673,LastTimestamp:2026-01-26 00:08:45.406779324 +0000 UTC m=+3.315698673,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.225176 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f4e8426c140 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:45.688226112 +0000 UTC m=+3.597145481,LastTimestamp:2026-01-26 00:08:45.688226112 +0000 UTC m=+3.597145481,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.229352 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f4e84da3484 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:45.699986564 +0000 UTC m=+3.608905923,LastTimestamp:2026-01-26 00:08:45.699986564 +0000 UTC m=+3.608905923,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.233551 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f4e84e73f00 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:45.700841216 +0000 UTC m=+3.609760555,LastTimestamp:2026-01-26 00:08:45.700841216 +0000 UTC m=+3.609760555,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.240550 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f4e9069548a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:45.893915786 +0000 UTC m=+3.802835135,LastTimestamp:2026-01-26 00:08:45.893915786 +0000 UTC m=+3.802835135,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 
26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.245024 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f4e9118fd8b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:45.905427851 +0000 UTC m=+3.814347190,LastTimestamp:2026-01-26 00:08:45.905427851 +0000 UTC m=+3.814347190,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.249192 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f4e91290798 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:45.906479 +0000 UTC m=+3.815398349,LastTimestamp:2026-01-26 00:08:45.906479 +0000 UTC m=+3.815398349,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.253261 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f4e9a8e981e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:46.064130078 +0000 UTC m=+3.973049427,LastTimestamp:2026-01-26 00:08:46.064130078 +0000 UTC m=+3.973049427,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.257016 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f4e9b5666b2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:46.077224626 +0000 UTC m=+3.986143975,LastTimestamp:2026-01-26 00:08:46.077224626 +0000 UTC m=+3.986143975,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.260729 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f4e9b6430fc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:46.07812838 +0000 UTC m=+3.987047729,LastTimestamp:2026-01-26 00:08:46.07812838 +0000 UTC m=+3.987047729,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.266727 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f4ea98f90bd openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:46.315851965 +0000 UTC m=+4.224771334,LastTimestamp:2026-01-26 00:08:46.315851965 +0000 UTC m=+4.224771334,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.270601 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f4eaa324ff4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:46.326517748 +0000 UTC m=+4.235437107,LastTimestamp:2026-01-26 00:08:46.326517748 +0000 UTC m=+4.235437107,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.274269 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f4eaa45c4de openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:46.327792862 +0000 UTC m=+4.236712211,LastTimestamp:2026-01-26 00:08:46.327792862 +0000 UTC m=+4.236712211,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.278328 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f4eb47e515f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:46.499271007 +0000 UTC m=+4.408190356,LastTimestamp:2026-01-26 00:08:46.499271007 +0000 UTC m=+4.408190356,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.282715 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f4eb5394605 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:46.511523333 +0000 UTC m=+4.420442682,LastTimestamp:2026-01-26 00:08:46.511523333 +0000 UTC m=+4.420442682,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.288145 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Jan 26 00:09:00 crc kubenswrapper[5124]: &Event{ObjectMeta:{kube-controller-manager-crc.188e1f50896d910b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] 
map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 26 00:09:00 crc kubenswrapper[5124]: body: Jan 26 00:09:00 crc kubenswrapper[5124]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:54.366687499 +0000 UTC m=+12.275606838,LastTimestamp:2026-01-26 00:08:54.366687499 +0000 UTC m=+12.275606838,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 26 00:09:00 crc kubenswrapper[5124]: > Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.291744 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f50896eb9a7 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:54.366763431 +0000 UTC m=+12.275682780,LastTimestamp:2026-01-26 00:08:54.366763431 +0000 UTC m=+12.275682780,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.295377 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 26 00:09:00 crc kubenswrapper[5124]: &Event{ObjectMeta:{kube-apiserver-crc.188e1f50a329ba78 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Jan 26 00:09:00 crc kubenswrapper[5124]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 26 00:09:00 crc kubenswrapper[5124]: Jan 26 00:09:00 crc kubenswrapper[5124]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:54.798449272 +0000 UTC m=+12.707368621,LastTimestamp:2026-01-26 00:08:54.798449272 +0000 UTC m=+12.707368621,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 26 00:09:00 crc 
kubenswrapper[5124]: > Jan 26 00:09:00 crc kubenswrapper[5124]: I0126 00:09:00.300543 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.301811 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f50a32a4b29 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:54.798486313 +0000 UTC m=+12.707405662,LastTimestamp:2026-01-26 00:08:54.798486313 +0000 UTC m=+12.707405662,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.303722 5124 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f50a329ba78\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 26 00:09:00 crc kubenswrapper[5124]: &Event{ObjectMeta:{kube-apiserver-crc.188e1f50a329ba78 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Jan 26 00:09:00 crc kubenswrapper[5124]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 26 00:09:00 crc kubenswrapper[5124]: Jan 26 00:09:00 crc kubenswrapper[5124]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:54.798449272 +0000 UTC m=+12.707368621,LastTimestamp:2026-01-26 00:08:54.804336307 +0000 UTC m=+12.713255656,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 26 00:09:00 crc kubenswrapper[5124]: > Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.305979 5124 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f50a32a4b29\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f50a32a4b29 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:54.798486313 +0000 UTC m=+12.707405662,LastTimestamp:2026-01-26 00:08:54.804386999 +0000 UTC m=+12.713306348,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.308954 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 26 00:09:00 crc kubenswrapper[5124]: &Event{ObjectMeta:{kube-apiserver-crc.188e1f5103c59804 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Jan 26 00:09:00 crc kubenswrapper[5124]: body: Jan 26 00:09:00 crc kubenswrapper[5124]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:56.419276804 +0000 UTC m=+14.328196153,LastTimestamp:2026-01-26 00:08:56.419276804 +0000 UTC m=+14.328196153,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 26 00:09:00 crc kubenswrapper[5124]: > Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.316269 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f5103c652c7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:56.419324615 +0000 UTC m=+14.328243964,LastTimestamp:2026-01-26 00:08:56.419324615 +0000 UTC m=+14.328243964,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.321961 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 26 00:09:00 crc kubenswrapper[5124]: &Event{ObjectMeta:{kube-apiserver-crc.188e1f514b18a385 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Jan 26 00:09:00 crc kubenswrapper[5124]: body: Jan 26 00:09:00 crc kubenswrapper[5124]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:57.615901573 +0000 UTC m=+15.524820922,LastTimestamp:2026-01-26 00:08:57.615901573 +0000 UTC m=+15.524820922,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 26 00:09:00 crc kubenswrapper[5124]: > Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.326639 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f514b19533b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:57.615946555 +0000 UTC m=+15.524865904,LastTimestamp:2026-01-26 00:08:57.615946555 +0000 UTC m=+15.524865904,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.330954 5124 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f514b18a385\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 26 00:09:00 crc kubenswrapper[5124]: &Event{ObjectMeta:{kube-apiserver-crc.188e1f514b18a385 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Jan 26 00:09:00 crc kubenswrapper[5124]: body: Jan 26 00:09:00 crc kubenswrapper[5124]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:57.615901573 +0000 UTC m=+15.524820922,LastTimestamp:2026-01-26 00:08:58.444649989 +0000 UTC m=+16.353569358,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 26 00:09:00 crc kubenswrapper[5124]: > Jan 26 00:09:00 crc kubenswrapper[5124]: E0126 00:09:00.336563 5124 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f514b19533b\" is forbidden: User \"system:anonymous\" cannot patch 
resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f514b19533b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:57.615946555 +0000 UTC m=+15.524865904,LastTimestamp:2026-01-26 00:08:58.44469084 +0000 UTC m=+16.353610189,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:01 crc kubenswrapper[5124]: I0126 00:09:01.300942 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:01 crc kubenswrapper[5124]: I0126 00:09:01.382868 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:01 crc kubenswrapper[5124]: I0126 00:09:01.383066 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:01 crc kubenswrapper[5124]: I0126 00:09:01.384080 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:01 crc kubenswrapper[5124]: I0126 00:09:01.384180 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:01 crc kubenswrapper[5124]: I0126 00:09:01.384208 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:01 crc kubenswrapper[5124]: E0126 00:09:01.384939 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:01 crc kubenswrapper[5124]: I0126 00:09:01.396321 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:01 crc kubenswrapper[5124]: I0126 00:09:01.402738 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:01 crc kubenswrapper[5124]: I0126 00:09:01.451940 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 26 00:09:01 crc kubenswrapper[5124]: I0126 00:09:01.453659 5124 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="7cc17d906fe2ad8e3b9fa270994f12c6d3e8dda7bd0681854752228ac2ed2021" exitCode=255 Jan 26 00:09:01 crc kubenswrapper[5124]: I0126 00:09:01.453743 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"7cc17d906fe2ad8e3b9fa270994f12c6d3e8dda7bd0681854752228ac2ed2021"} 
Jan 26 00:09:01 crc kubenswrapper[5124]: I0126 00:09:01.453956 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:01 crc kubenswrapper[5124]: I0126 00:09:01.454021 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:01 crc kubenswrapper[5124]: I0126 00:09:01.454508 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:01 crc kubenswrapper[5124]: I0126 00:09:01.454538 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:01 crc kubenswrapper[5124]: I0126 00:09:01.454548 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:01 crc kubenswrapper[5124]: I0126 00:09:01.454570 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:01 crc kubenswrapper[5124]: I0126 00:09:01.454617 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:01 crc kubenswrapper[5124]: I0126 00:09:01.454633 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:01 crc kubenswrapper[5124]: E0126 00:09:01.454875 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:01 crc kubenswrapper[5124]: E0126 00:09:01.455084 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:01 crc kubenswrapper[5124]: I0126 00:09:01.455089 5124 scope.go:117] "RemoveContainer" containerID="7cc17d906fe2ad8e3b9fa270994f12c6d3e8dda7bd0681854752228ac2ed2021" Jan 26 00:09:01 crc kubenswrapper[5124]: E0126 00:09:01.461813 5124 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f4e4620188e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f4e4620188e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:44.647602318 +0000 UTC m=+2.556521667,LastTimestamp:2026-01-26 00:09:01.457369936 +0000 UTC m=+19.366289285,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:01 crc kubenswrapper[5124]: E0126 00:09:01.680079 5124 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f4e51800314\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f4e51800314 
openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:44.838437652 +0000 UTC m=+2.747357001,LastTimestamp:2026-01-26 00:09:01.675314953 +0000 UTC m=+19.584234302,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:01 crc kubenswrapper[5124]: E0126 00:09:01.690161 5124 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f4e51f29d32\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f4e51f29d32 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:44.84594821 +0000 UTC m=+2.754867559,LastTimestamp:2026-01-26 00:09:01.683482879 +0000 UTC m=+19.592402228,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:02 crc kubenswrapper[5124]: I0126 00:09:02.300758 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:02 crc kubenswrapper[5124]: E0126 00:09:02.420896 5124 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 00:09:02 crc kubenswrapper[5124]: I0126 00:09:02.457783 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 26 00:09:02 crc kubenswrapper[5124]: I0126 00:09:02.459286 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"32d30fb4b561b5f82b3c1e7a76661218ab5417211df371cba8a3cdb0aa54e912"} Jan 26 00:09:02 crc kubenswrapper[5124]: I0126 00:09:02.459413 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:02 crc kubenswrapper[5124]: I0126 00:09:02.459522 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:02 crc kubenswrapper[5124]: I0126 00:09:02.460187 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:02 crc kubenswrapper[5124]: I0126 00:09:02.460215 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:02 crc 
kubenswrapper[5124]: I0126 00:09:02.460194 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:02 crc kubenswrapper[5124]: I0126 00:09:02.460243 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:02 crc kubenswrapper[5124]: I0126 00:09:02.460253 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:02 crc kubenswrapper[5124]: I0126 00:09:02.460226 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:02 crc kubenswrapper[5124]: E0126 00:09:02.460637 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:02 crc kubenswrapper[5124]: E0126 00:09:02.460844 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:03 crc kubenswrapper[5124]: I0126 00:09:03.005816 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:03 crc kubenswrapper[5124]: I0126 00:09:03.006702 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:03 crc kubenswrapper[5124]: I0126 00:09:03.006922 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:03 crc kubenswrapper[5124]: I0126 00:09:03.007009 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:03 crc kubenswrapper[5124]: I0126 00:09:03.007101 5124 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:09:03 crc kubenswrapper[5124]: E0126 00:09:03.015010 5124 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 26 00:09:03 crc kubenswrapper[5124]: E0126 00:09:03.130425 5124 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 26 00:09:03 crc kubenswrapper[5124]: I0126 00:09:03.319295 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:03 crc kubenswrapper[5124]: I0126 00:09:03.463009 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 26 00:09:03 crc kubenswrapper[5124]: I0126 00:09:03.463544 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 26 00:09:03 crc kubenswrapper[5124]: I0126 00:09:03.465169 5124 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" 
containerID="32d30fb4b561b5f82b3c1e7a76661218ab5417211df371cba8a3cdb0aa54e912" exitCode=255 Jan 26 00:09:03 crc kubenswrapper[5124]: I0126 00:09:03.465250 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"32d30fb4b561b5f82b3c1e7a76661218ab5417211df371cba8a3cdb0aa54e912"} Jan 26 00:09:03 crc kubenswrapper[5124]: I0126 00:09:03.465353 5124 scope.go:117] "RemoveContainer" containerID="7cc17d906fe2ad8e3b9fa270994f12c6d3e8dda7bd0681854752228ac2ed2021" Jan 26 00:09:03 crc kubenswrapper[5124]: I0126 00:09:03.465485 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:03 crc kubenswrapper[5124]: I0126 00:09:03.465918 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:03 crc kubenswrapper[5124]: I0126 00:09:03.465947 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:03 crc kubenswrapper[5124]: I0126 00:09:03.465956 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:03 crc kubenswrapper[5124]: E0126 00:09:03.466237 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:03 crc kubenswrapper[5124]: I0126 00:09:03.466450 5124 scope.go:117] "RemoveContainer" containerID="32d30fb4b561b5f82b3c1e7a76661218ab5417211df371cba8a3cdb0aa54e912" Jan 26 00:09:03 crc kubenswrapper[5124]: E0126 00:09:03.466655 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 26 00:09:03 crc kubenswrapper[5124]: E0126 00:09:03.471266 5124 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f52a7d3908b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:03.466623115 +0000 UTC m=+21.375542464,LastTimestamp:2026-01-26 00:09:03.466623115 +0000 UTC m=+21.375542464,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:03 crc kubenswrapper[5124]: I0126 00:09:03.554634 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 26 00:09:03 crc kubenswrapper[5124]: I0126 00:09:03.554918 5124 kubelet_node_status.go:413] "Setting node 
annotation to enable volume controller attach/detach" Jan 26 00:09:03 crc kubenswrapper[5124]: I0126 00:09:03.556910 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:03 crc kubenswrapper[5124]: I0126 00:09:03.556978 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:03 crc kubenswrapper[5124]: I0126 00:09:03.556993 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:03 crc kubenswrapper[5124]: E0126 00:09:03.558238 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:03 crc kubenswrapper[5124]: I0126 00:09:03.570150 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 26 00:09:03 crc kubenswrapper[5124]: E0126 00:09:03.823731 5124 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 26 00:09:04 crc kubenswrapper[5124]: I0126 00:09:04.299692 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:04 crc kubenswrapper[5124]: I0126 00:09:04.468738 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 26 00:09:04 crc kubenswrapper[5124]: I0126 00:09:04.471753 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:04 crc kubenswrapper[5124]: I0126 00:09:04.472452 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:04 crc kubenswrapper[5124]: I0126 00:09:04.472490 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:04 crc kubenswrapper[5124]: I0126 00:09:04.472502 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:04 crc kubenswrapper[5124]: E0126 00:09:04.472838 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:04 crc kubenswrapper[5124]: E0126 00:09:04.570749 5124 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 26 00:09:04 crc kubenswrapper[5124]: E0126 00:09:04.925981 5124 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 26 00:09:05 crc kubenswrapper[5124]: I0126 
00:09:05.304030 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:05 crc kubenswrapper[5124]: E0126 00:09:05.925156 5124 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 26 00:09:06 crc kubenswrapper[5124]: I0126 00:09:06.300117 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:06 crc kubenswrapper[5124]: I0126 00:09:06.418264 5124 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:06 crc kubenswrapper[5124]: I0126 00:09:06.418453 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:06 crc kubenswrapper[5124]: I0126 00:09:06.419118 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:06 crc kubenswrapper[5124]: I0126 00:09:06.419148 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:06 crc kubenswrapper[5124]: I0126 00:09:06.419160 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:06 crc kubenswrapper[5124]: E0126 00:09:06.419463 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:06 crc kubenswrapper[5124]: I0126 00:09:06.419703 5124 scope.go:117] "RemoveContainer" containerID="32d30fb4b561b5f82b3c1e7a76661218ab5417211df371cba8a3cdb0aa54e912" Jan 26 00:09:06 crc kubenswrapper[5124]: E0126 00:09:06.419876 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 26 00:09:06 crc kubenswrapper[5124]: E0126 00:09:06.424070 5124 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f52a7d3908b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f52a7d3908b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod 
kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:03.466623115 +0000 UTC m=+21.375542464,LastTimestamp:2026-01-26 00:09:06.419851918 +0000 UTC m=+24.328771267,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:07 crc kubenswrapper[5124]: I0126 00:09:07.301053 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:08 crc kubenswrapper[5124]: I0126 00:09:08.301472 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:09 crc kubenswrapper[5124]: I0126 00:09:09.303848 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:09 crc kubenswrapper[5124]: I0126 00:09:09.415800 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:09 crc kubenswrapper[5124]: I0126 00:09:09.417188 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:09 crc kubenswrapper[5124]: I0126 00:09:09.417232 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:09 crc kubenswrapper[5124]: I0126 00:09:09.417244 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:09 crc kubenswrapper[5124]: I0126 00:09:09.417268 5124 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:09:09 crc kubenswrapper[5124]: E0126 00:09:09.428224 5124 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 26 00:09:10 crc kubenswrapper[5124]: I0126 00:09:10.302656 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:11 crc kubenswrapper[5124]: I0126 00:09:11.296632 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:11 crc kubenswrapper[5124]: E0126 00:09:11.927797 5124 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 26 00:09:12 crc kubenswrapper[5124]: I0126 00:09:12.299771 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:12 crc kubenswrapper[5124]: E0126 00:09:12.421311 5124 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 00:09:12 crc kubenswrapper[5124]: I0126 00:09:12.460408 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:12 crc kubenswrapper[5124]: I0126 00:09:12.460759 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:12 crc kubenswrapper[5124]: I0126 00:09:12.461713 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:12 crc kubenswrapper[5124]: I0126 00:09:12.461837 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:12 crc kubenswrapper[5124]: I0126 00:09:12.461908 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:12 crc kubenswrapper[5124]: E0126 00:09:12.462311 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:12 crc kubenswrapper[5124]: I0126 00:09:12.462609 5124 scope.go:117] "RemoveContainer" containerID="32d30fb4b561b5f82b3c1e7a76661218ab5417211df371cba8a3cdb0aa54e912" Jan 26 00:09:12 crc kubenswrapper[5124]: E0126 00:09:12.462875 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 26 00:09:12 crc kubenswrapper[5124]: E0126 00:09:12.470636 5124 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f52a7d3908b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f52a7d3908b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:03.466623115 +0000 UTC m=+21.375542464,LastTimestamp:2026-01-26 00:09:12.462844697 +0000 UTC m=+30.371764036,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:13 crc kubenswrapper[5124]: I0126 00:09:13.302154 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the 
cluster scope Jan 26 00:09:14 crc kubenswrapper[5124]: I0126 00:09:14.302676 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:15 crc kubenswrapper[5124]: I0126 00:09:15.304197 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:15 crc kubenswrapper[5124]: E0126 00:09:15.438887 5124 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 26 00:09:16 crc kubenswrapper[5124]: E0126 00:09:16.266842 5124 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 26 00:09:16 crc kubenswrapper[5124]: I0126 00:09:16.302843 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:16 crc kubenswrapper[5124]: I0126 00:09:16.429344 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:16 crc kubenswrapper[5124]: I0126 00:09:16.430778 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:16 crc kubenswrapper[5124]: I0126 00:09:16.430872 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:16 crc kubenswrapper[5124]: I0126 00:09:16.430898 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:16 crc kubenswrapper[5124]: I0126 00:09:16.430947 5124 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:09:16 crc kubenswrapper[5124]: E0126 00:09:16.442752 5124 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 26 00:09:17 crc kubenswrapper[5124]: I0126 00:09:17.301149 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:17 crc kubenswrapper[5124]: E0126 00:09:17.366876 5124 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 26 00:09:18 crc kubenswrapper[5124]: E0126 
00:09:18.198193 5124 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 26 00:09:18 crc kubenswrapper[5124]: I0126 00:09:18.304309 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:18 crc kubenswrapper[5124]: E0126 00:09:18.936895 5124 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 26 00:09:19 crc kubenswrapper[5124]: I0126 00:09:19.300065 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:20 crc kubenswrapper[5124]: I0126 00:09:20.303689 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:21 crc kubenswrapper[5124]: I0126 00:09:21.305934 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:22 crc kubenswrapper[5124]: I0126 00:09:22.303399 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:22 crc kubenswrapper[5124]: E0126 00:09:22.422821 5124 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 00:09:23 crc kubenswrapper[5124]: I0126 00:09:23.303719 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:23 crc kubenswrapper[5124]: I0126 00:09:23.443176 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:23 crc kubenswrapper[5124]: I0126 00:09:23.444657 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:23 crc kubenswrapper[5124]: I0126 00:09:23.444738 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:23 crc kubenswrapper[5124]: I0126 00:09:23.444768 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:23 crc kubenswrapper[5124]: I0126 00:09:23.444822 5124 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:09:23 crc kubenswrapper[5124]: E0126 00:09:23.459466 5124 
kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 26 00:09:24 crc kubenswrapper[5124]: I0126 00:09:24.302473 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:25 crc kubenswrapper[5124]: I0126 00:09:25.302902 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:25 crc kubenswrapper[5124]: E0126 00:09:25.944415 5124 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 26 00:09:26 crc kubenswrapper[5124]: I0126 00:09:26.301154 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:26 crc kubenswrapper[5124]: I0126 00:09:26.365031 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:26 crc kubenswrapper[5124]: I0126 00:09:26.366142 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:26 crc kubenswrapper[5124]: I0126 00:09:26.366198 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:26 crc kubenswrapper[5124]: I0126 00:09:26.366218 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:26 crc kubenswrapper[5124]: E0126 00:09:26.366803 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:26 crc kubenswrapper[5124]: I0126 00:09:26.367232 5124 scope.go:117] "RemoveContainer" containerID="32d30fb4b561b5f82b3c1e7a76661218ab5417211df371cba8a3cdb0aa54e912" Jan 26 00:09:26 crc kubenswrapper[5124]: E0126 00:09:26.379727 5124 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f4e4620188e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f4e4620188e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:44.647602318 +0000 UTC 
m=+2.556521667,LastTimestamp:2026-01-26 00:09:26.368918009 +0000 UTC m=+44.277837398,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:26 crc kubenswrapper[5124]: E0126 00:09:26.592328 5124 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f4e51800314\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f4e51800314 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:44.838437652 +0000 UTC m=+2.747357001,LastTimestamp:2026-01-26 00:09:26.587218274 +0000 UTC m=+44.496137623,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:26 crc kubenswrapper[5124]: E0126 00:09:26.605306 5124 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f4e51f29d32\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f4e51f29d32 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:08:44.84594821 +0000 UTC m=+2.754867559,LastTimestamp:2026-01-26 00:09:26.597689092 +0000 UTC m=+44.506608441,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:27 crc kubenswrapper[5124]: I0126 00:09:27.304998 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:27 crc kubenswrapper[5124]: I0126 00:09:27.540037 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 26 00:09:27 crc kubenswrapper[5124]: I0126 00:09:27.542623 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"54fe5054d12e34672ab0b7958239e2c701d787ac0ae96126038f4043c83abbec"} Jan 26 00:09:27 crc kubenswrapper[5124]: I0126 00:09:27.542891 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:27 crc kubenswrapper[5124]: I0126 00:09:27.543760 5124 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:27 crc kubenswrapper[5124]: I0126 00:09:27.543797 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:27 crc kubenswrapper[5124]: I0126 00:09:27.543810 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:27 crc kubenswrapper[5124]: E0126 00:09:27.544184 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:28 crc kubenswrapper[5124]: I0126 00:09:28.299630 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:29 crc kubenswrapper[5124]: I0126 00:09:29.300220 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:29 crc kubenswrapper[5124]: I0126 00:09:29.550519 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 26 00:09:29 crc kubenswrapper[5124]: I0126 00:09:29.551160 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 26 00:09:29 crc kubenswrapper[5124]: I0126 00:09:29.552966 5124 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="54fe5054d12e34672ab0b7958239e2c701d787ac0ae96126038f4043c83abbec" exitCode=255 Jan 26 00:09:29 crc kubenswrapper[5124]: I0126 00:09:29.553044 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"54fe5054d12e34672ab0b7958239e2c701d787ac0ae96126038f4043c83abbec"} Jan 26 00:09:29 crc kubenswrapper[5124]: I0126 00:09:29.553099 5124 scope.go:117] "RemoveContainer" containerID="32d30fb4b561b5f82b3c1e7a76661218ab5417211df371cba8a3cdb0aa54e912" Jan 26 00:09:29 crc kubenswrapper[5124]: I0126 00:09:29.553253 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:29 crc kubenswrapper[5124]: I0126 00:09:29.553819 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:29 crc kubenswrapper[5124]: I0126 00:09:29.553850 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:29 crc kubenswrapper[5124]: I0126 00:09:29.553860 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:29 crc kubenswrapper[5124]: E0126 00:09:29.554120 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:29 crc kubenswrapper[5124]: I0126 00:09:29.554363 5124 scope.go:117] "RemoveContainer" containerID="54fe5054d12e34672ab0b7958239e2c701d787ac0ae96126038f4043c83abbec" Jan 26 00:09:29 crc kubenswrapper[5124]: E0126 
00:09:29.554549 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 26 00:09:29 crc kubenswrapper[5124]: E0126 00:09:29.564398 5124 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f52a7d3908b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f52a7d3908b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:03.466623115 +0000 UTC m=+21.375542464,LastTimestamp:2026-01-26 00:09:29.55451718 +0000 UTC m=+47.463436529,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:30 crc kubenswrapper[5124]: I0126 00:09:30.301178 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:30 crc kubenswrapper[5124]: I0126 00:09:30.459656 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:30 crc kubenswrapper[5124]: I0126 00:09:30.460937 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:30 crc kubenswrapper[5124]: I0126 00:09:30.461026 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:30 crc kubenswrapper[5124]: I0126 00:09:30.461052 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:30 crc kubenswrapper[5124]: I0126 00:09:30.461095 5124 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:09:30 crc kubenswrapper[5124]: E0126 00:09:30.476976 5124 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 26 00:09:30 crc kubenswrapper[5124]: I0126 00:09:30.557499 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 26 00:09:31 crc kubenswrapper[5124]: E0126 00:09:31.102169 5124 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group 
\"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 26 00:09:31 crc kubenswrapper[5124]: I0126 00:09:31.301212 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:32 crc kubenswrapper[5124]: E0126 00:09:32.059765 5124 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 26 00:09:32 crc kubenswrapper[5124]: I0126 00:09:32.301784 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:32 crc kubenswrapper[5124]: E0126 00:09:32.423942 5124 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 00:09:32 crc kubenswrapper[5124]: E0126 00:09:32.480107 5124 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 26 00:09:32 crc kubenswrapper[5124]: E0126 00:09:32.949309 5124 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 26 00:09:33 crc kubenswrapper[5124]: I0126 00:09:33.300860 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:34 crc kubenswrapper[5124]: I0126 00:09:34.300861 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:35 crc kubenswrapper[5124]: I0126 00:09:35.303792 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:35 crc kubenswrapper[5124]: E0126 00:09:35.405437 5124 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 26 00:09:36 crc kubenswrapper[5124]: I0126 00:09:36.299821 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group 
"storage.k8s.io" at the cluster scope Jan 26 00:09:36 crc kubenswrapper[5124]: I0126 00:09:36.418517 5124 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:36 crc kubenswrapper[5124]: I0126 00:09:36.418865 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:36 crc kubenswrapper[5124]: I0126 00:09:36.419753 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:36 crc kubenswrapper[5124]: I0126 00:09:36.419785 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:36 crc kubenswrapper[5124]: I0126 00:09:36.419794 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:36 crc kubenswrapper[5124]: E0126 00:09:36.420113 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:36 crc kubenswrapper[5124]: I0126 00:09:36.420406 5124 scope.go:117] "RemoveContainer" containerID="54fe5054d12e34672ab0b7958239e2c701d787ac0ae96126038f4043c83abbec" Jan 26 00:09:36 crc kubenswrapper[5124]: E0126 00:09:36.420604 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 26 00:09:36 crc kubenswrapper[5124]: E0126 00:09:36.425108 5124 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f52a7d3908b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f52a7d3908b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:03.466623115 +0000 UTC m=+21.375542464,LastTimestamp:2026-01-26 00:09:36.420563034 +0000 UTC m=+54.329482383,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:37 crc kubenswrapper[5124]: I0126 00:09:37.299418 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:37 crc kubenswrapper[5124]: I0126 00:09:37.484546 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:37 crc kubenswrapper[5124]: I0126 00:09:37.485780 5124 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:37 crc kubenswrapper[5124]: I0126 00:09:37.485927 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:37 crc kubenswrapper[5124]: I0126 00:09:37.486053 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:37 crc kubenswrapper[5124]: I0126 00:09:37.486243 5124 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:09:37 crc kubenswrapper[5124]: E0126 00:09:37.499454 5124 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 26 00:09:37 crc kubenswrapper[5124]: I0126 00:09:37.543723 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:37 crc kubenswrapper[5124]: I0126 00:09:37.543959 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:37 crc kubenswrapper[5124]: I0126 00:09:37.544895 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:37 crc kubenswrapper[5124]: I0126 00:09:37.544943 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:37 crc kubenswrapper[5124]: I0126 00:09:37.544952 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:37 crc kubenswrapper[5124]: E0126 00:09:37.545318 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:37 crc kubenswrapper[5124]: I0126 00:09:37.545575 5124 scope.go:117] "RemoveContainer" containerID="54fe5054d12e34672ab0b7958239e2c701d787ac0ae96126038f4043c83abbec" Jan 26 00:09:37 crc kubenswrapper[5124]: E0126 00:09:37.545784 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 26 00:09:37 crc kubenswrapper[5124]: E0126 00:09:37.549872 5124 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f52a7d3908b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f52a7d3908b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:03.466623115 +0000 UTC m=+21.375542464,LastTimestamp:2026-01-26 
00:09:37.545754327 +0000 UTC m=+55.454673676,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:37 crc kubenswrapper[5124]: I0126 00:09:37.896722 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:09:37 crc kubenswrapper[5124]: I0126 00:09:37.896886 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:37 crc kubenswrapper[5124]: I0126 00:09:37.897706 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:37 crc kubenswrapper[5124]: I0126 00:09:37.897856 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:37 crc kubenswrapper[5124]: I0126 00:09:37.897980 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:37 crc kubenswrapper[5124]: E0126 00:09:37.898407 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:38 crc kubenswrapper[5124]: I0126 00:09:38.299965 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:39 crc kubenswrapper[5124]: I0126 00:09:39.300782 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:39 crc kubenswrapper[5124]: E0126 00:09:39.954733 5124 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 26 00:09:40 crc kubenswrapper[5124]: I0126 00:09:40.299827 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:41 crc kubenswrapper[5124]: I0126 00:09:41.302821 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:42 crc kubenswrapper[5124]: I0126 00:09:42.300780 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:42 crc kubenswrapper[5124]: E0126 00:09:42.424355 5124 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 00:09:43 crc kubenswrapper[5124]: I0126 00:09:43.303620 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get 
resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:44 crc kubenswrapper[5124]: I0126 00:09:44.300348 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:44 crc kubenswrapper[5124]: I0126 00:09:44.499716 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:44 crc kubenswrapper[5124]: I0126 00:09:44.500866 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:44 crc kubenswrapper[5124]: I0126 00:09:44.501099 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:44 crc kubenswrapper[5124]: I0126 00:09:44.501311 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:44 crc kubenswrapper[5124]: I0126 00:09:44.501479 5124 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:09:44 crc kubenswrapper[5124]: E0126 00:09:44.513311 5124 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 26 00:09:45 crc kubenswrapper[5124]: I0126 00:09:45.299900 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:46 crc kubenswrapper[5124]: I0126 00:09:46.300452 5124 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:46 crc kubenswrapper[5124]: E0126 00:09:46.960906 5124 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 26 00:09:47 crc kubenswrapper[5124]: I0126 00:09:47.034223 5124 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-cfwhr" Jan 26 00:09:47 crc kubenswrapper[5124]: I0126 00:09:47.040267 5124 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-cfwhr" Jan 26 00:09:47 crc kubenswrapper[5124]: I0126 00:09:47.125761 5124 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 26 00:09:47 crc kubenswrapper[5124]: I0126 00:09:47.226864 5124 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 26 00:09:48 crc kubenswrapper[5124]: I0126 00:09:48.042359 5124 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-02-25 00:04:47 +0000 UTC" deadline="2026-02-17 10:00:56.481263492 +0000 UTC" Jan 26 00:09:48 crc kubenswrapper[5124]: I0126 00:09:48.042443 5124 certificate_manager.go:431] "Waiting for next 
certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="537h51m8.438827073s" Jan 26 00:09:48 crc kubenswrapper[5124]: I0126 00:09:48.364677 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:48 crc kubenswrapper[5124]: I0126 00:09:48.365840 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:48 crc kubenswrapper[5124]: I0126 00:09:48.365920 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:48 crc kubenswrapper[5124]: I0126 00:09:48.365948 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:48 crc kubenswrapper[5124]: E0126 00:09:48.367075 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:48 crc kubenswrapper[5124]: I0126 00:09:48.367461 5124 scope.go:117] "RemoveContainer" containerID="54fe5054d12e34672ab0b7958239e2c701d787ac0ae96126038f4043c83abbec" Jan 26 00:09:48 crc kubenswrapper[5124]: E0126 00:09:48.367836 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 26 00:09:51 crc kubenswrapper[5124]: I0126 00:09:51.513733 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:51 crc kubenswrapper[5124]: I0126 00:09:51.514663 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:51 crc kubenswrapper[5124]: I0126 00:09:51.514718 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:51 crc kubenswrapper[5124]: I0126 00:09:51.514741 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:51 crc kubenswrapper[5124]: I0126 00:09:51.514881 5124 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:09:51 crc kubenswrapper[5124]: I0126 00:09:51.523412 5124 kubelet_node_status.go:127] "Node was previously registered" node="crc" Jan 26 00:09:51 crc kubenswrapper[5124]: I0126 00:09:51.523763 5124 kubelet_node_status.go:81] "Successfully registered node" node="crc" Jan 26 00:09:51 crc kubenswrapper[5124]: E0126 00:09:51.523797 5124 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Jan 26 00:09:51 crc kubenswrapper[5124]: I0126 00:09:51.526946 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:51 crc kubenswrapper[5124]: I0126 00:09:51.527041 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:51 crc kubenswrapper[5124]: I0126 00:09:51.527064 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:51 crc kubenswrapper[5124]: I0126 00:09:51.527158 5124 kubelet_node_status.go:736] "Recording event 
message for node" node="crc" event="NodeNotReady" Jan 26 00:09:51 crc kubenswrapper[5124]: I0126 00:09:51.527178 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:51Z","lastTransitionTime":"2026-01-26T00:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:09:51 crc kubenswrapper[5124]: E0126 00:09:51.541558 5124 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"24413647-b67c-4e2e-bb9e-ac26cf92e744\\\",\\\"systemUUID\\\":\\\"c7fd9a8b-5491-44c4-bd96-9fa0fdb97ad8\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:09:51 crc kubenswrapper[5124]: I0126 00:09:51.550047 5124 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:51 crc kubenswrapper[5124]: I0126 00:09:51.550136 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:51 crc kubenswrapper[5124]: I0126 00:09:51.550158 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:51 crc kubenswrapper[5124]: I0126 00:09:51.550181 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:51 crc kubenswrapper[5124]: I0126 00:09:51.550200 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:51Z","lastTransitionTime":"2026-01-26T00:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:09:51 crc kubenswrapper[5124]: E0126 00:09:51.559247 5124 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"24413647-b67c-4e2e-bb9e-ac26cf92e744\\\",\\\"systemUUID\\\":\\\"c7fd9a8b-5491-44c4-bd96-9fa0fdb97ad8\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:09:51 crc kubenswrapper[5124]: I0126 00:09:51.566346 5124 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:51 crc kubenswrapper[5124]: I0126 00:09:51.566452 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:51 crc kubenswrapper[5124]: I0126 00:09:51.566466 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:51 crc kubenswrapper[5124]: I0126 00:09:51.566485 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:51 crc kubenswrapper[5124]: I0126 00:09:51.566498 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:51Z","lastTransitionTime":"2026-01-26T00:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:09:51 crc kubenswrapper[5124]: E0126 00:09:51.575802 5124 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"24413647-b67c-4e2e-bb9e-ac26cf92e744\\\",\\\"systemUUID\\\":\\\"c7fd9a8b-5491-44c4-bd96-9fa0fdb97ad8\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:09:51 crc kubenswrapper[5124]: I0126 00:09:51.583254 5124 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:51 crc kubenswrapper[5124]: I0126 00:09:51.583481 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:51 crc kubenswrapper[5124]: I0126 00:09:51.583498 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:51 crc kubenswrapper[5124]: I0126 00:09:51.583522 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:51 crc kubenswrapper[5124]: I0126 00:09:51.583537 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:51Z","lastTransitionTime":"2026-01-26T00:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:09:51 crc kubenswrapper[5124]: E0126 00:09:51.594302 5124 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"24413647-b67c-4e2e-bb9e-ac26cf92e744\\\",\\\"systemUUID\\\":\\\"c7fd9a8b-5491-44c4-bd96-9fa0fdb97ad8\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:09:51 crc kubenswrapper[5124]: E0126 00:09:51.594464 5124 
kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 26 00:09:51 crc kubenswrapper[5124]: E0126 00:09:51.594490 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:09:51 crc kubenswrapper[5124]: E0126 00:09:51.695129 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:09:51 crc kubenswrapper[5124]: E0126 00:09:51.796140 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:09:51 crc kubenswrapper[5124]: E0126 00:09:51.896951 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:09:51 crc kubenswrapper[5124]: E0126 00:09:51.997760 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:09:52 crc kubenswrapper[5124]: E0126 00:09:52.098058 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:09:52 crc kubenswrapper[5124]: E0126 00:09:52.199029 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:09:52 crc kubenswrapper[5124]: E0126 00:09:52.299849 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:09:52 crc kubenswrapper[5124]: E0126 00:09:52.400204 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:09:52 crc kubenswrapper[5124]: E0126 00:09:52.425174 5124 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 00:09:52 crc kubenswrapper[5124]: E0126 00:09:52.500361 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:09:52 crc kubenswrapper[5124]: E0126 00:09:52.600537 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:09:52 crc kubenswrapper[5124]: E0126 00:09:52.701573 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:09:52 crc kubenswrapper[5124]: E0126 00:09:52.802458 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:09:52 crc kubenswrapper[5124]: E0126 00:09:52.903229 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:09:53 crc kubenswrapper[5124]: E0126 00:09:53.004103 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:09:53 crc kubenswrapper[5124]: E0126 00:09:53.104981 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:09:53 crc kubenswrapper[5124]: E0126 00:09:53.205807 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:09:53 crc kubenswrapper[5124]: E0126 00:09:53.306194 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:09:53 crc kubenswrapper[5124]: E0126 00:09:53.406638 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:09:53 crc 
kubenswrapper[5124]: E0126 00:09:53.507815 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:53 crc kubenswrapper[5124]: E0126 00:09:53.608655 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:53 crc kubenswrapper[5124]: E0126 00:09:53.709697 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:53 crc kubenswrapper[5124]: E0126 00:09:53.810797 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:53 crc kubenswrapper[5124]: E0126 00:09:53.911643 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:54 crc kubenswrapper[5124]: E0126 00:09:54.012347 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:54 crc kubenswrapper[5124]: E0126 00:09:54.113030 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:54 crc kubenswrapper[5124]: E0126 00:09:54.213837 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:54 crc kubenswrapper[5124]: E0126 00:09:54.314134 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:54 crc kubenswrapper[5124]: E0126 00:09:54.414260 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:54 crc kubenswrapper[5124]: E0126 00:09:54.514486 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:54 crc kubenswrapper[5124]: E0126 00:09:54.614882 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:54 crc kubenswrapper[5124]: E0126 00:09:54.715714 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:54 crc kubenswrapper[5124]: E0126 00:09:54.816807 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:54 crc kubenswrapper[5124]: E0126 00:09:54.917786 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:55 crc kubenswrapper[5124]: E0126 00:09:55.018743 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:55 crc kubenswrapper[5124]: E0126 00:09:55.119580 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:55 crc kubenswrapper[5124]: E0126 00:09:55.220327 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:55 crc kubenswrapper[5124]: E0126 00:09:55.320700 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:55 crc kubenswrapper[5124]: E0126 00:09:55.421230 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:55 crc kubenswrapper[5124]: E0126 00:09:55.521963 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:55 crc kubenswrapper[5124]: E0126 00:09:55.623036 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:55 crc kubenswrapper[5124]: E0126 00:09:55.724108 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:55 crc kubenswrapper[5124]: E0126 00:09:55.824580 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:55 crc kubenswrapper[5124]: E0126 00:09:55.925306 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:56 crc kubenswrapper[5124]: E0126 00:09:56.026086 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:56 crc kubenswrapper[5124]: E0126 00:09:56.126237 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:56 crc kubenswrapper[5124]: E0126 00:09:56.227026 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:56 crc kubenswrapper[5124]: E0126 00:09:56.327475 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:56 crc kubenswrapper[5124]: E0126 00:09:56.428279 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:56 crc kubenswrapper[5124]: E0126 00:09:56.529429 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:56 crc kubenswrapper[5124]: E0126 00:09:56.629724 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:56 crc kubenswrapper[5124]: E0126 00:09:56.730059 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:56 crc kubenswrapper[5124]: E0126 00:09:56.830714 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:56 crc kubenswrapper[5124]: E0126 00:09:56.931490 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:57 crc kubenswrapper[5124]: E0126 00:09:57.032383 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:57 crc kubenswrapper[5124]: E0126 00:09:57.132740 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:57 crc kubenswrapper[5124]: E0126 00:09:57.233808 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:57 crc kubenswrapper[5124]: E0126 00:09:57.333927 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:57 crc kubenswrapper[5124]: E0126 00:09:57.434064 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:57 crc kubenswrapper[5124]: E0126 00:09:57.535197 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:57 crc kubenswrapper[5124]: E0126 00:09:57.636335 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:57 crc kubenswrapper[5124]: E0126 00:09:57.736861 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:57 crc kubenswrapper[5124]: E0126 00:09:57.837905 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:57 crc kubenswrapper[5124]: E0126 00:09:57.939032 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:58 crc kubenswrapper[5124]: E0126 00:09:58.039336 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:58 crc kubenswrapper[5124]: E0126 00:09:58.140454 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:58 crc kubenswrapper[5124]: E0126 00:09:58.240571 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:58 crc kubenswrapper[5124]: E0126 00:09:58.341269 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:58 crc kubenswrapper[5124]: I0126 00:09:58.365018 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 26 00:09:58 crc kubenswrapper[5124]: I0126 00:09:58.365852 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:09:58 crc kubenswrapper[5124]: I0126 00:09:58.365922 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:09:58 crc kubenswrapper[5124]: I0126 00:09:58.365950 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:09:58 crc kubenswrapper[5124]: E0126 00:09:58.366625 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 26 00:09:58 crc kubenswrapper[5124]: E0126 00:09:58.442370 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:58 crc kubenswrapper[5124]: E0126 00:09:58.543372 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:58 crc kubenswrapper[5124]: E0126 00:09:58.644036 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:58 crc kubenswrapper[5124]: E0126 00:09:58.744703 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:58 crc kubenswrapper[5124]: E0126 00:09:58.845257 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:58 crc kubenswrapper[5124]: E0126 00:09:58.945989 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:59 crc kubenswrapper[5124]: E0126 00:09:59.046529 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:59 crc kubenswrapper[5124]: E0126 00:09:59.147115 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:59 crc kubenswrapper[5124]: I0126 00:09:59.158721 5124 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 26 00:09:59 crc kubenswrapper[5124]: E0126 00:09:59.247519 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:59 crc kubenswrapper[5124]: E0126 00:09:59.348034 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:59 crc kubenswrapper[5124]: E0126 00:09:59.449133 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:59 crc kubenswrapper[5124]: E0126 00:09:59.549674 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:59 crc kubenswrapper[5124]: E0126 00:09:59.650841 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:59 crc kubenswrapper[5124]: E0126 00:09:59.751557 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:59 crc kubenswrapper[5124]: E0126 00:09:59.851915 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:59 crc kubenswrapper[5124]: E0126 00:09:59.952783 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:00 crc kubenswrapper[5124]: E0126 00:10:00.053908 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:00 crc kubenswrapper[5124]: E0126 00:10:00.154978 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:00 crc kubenswrapper[5124]: E0126 00:10:00.255331 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:00 crc kubenswrapper[5124]: E0126 00:10:00.355496 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:00 crc kubenswrapper[5124]: I0126 00:10:00.365184 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 26 00:10:00 crc kubenswrapper[5124]: I0126 00:10:00.366822 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:10:00 crc kubenswrapper[5124]: I0126 00:10:00.366878 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:10:00 crc kubenswrapper[5124]: I0126 00:10:00.366896 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:10:00 crc kubenswrapper[5124]: E0126 00:10:00.367559 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 26 00:10:00 crc kubenswrapper[5124]: I0126 00:10:00.367984 5124 scope.go:117] "RemoveContainer" containerID="54fe5054d12e34672ab0b7958239e2c701d787ac0ae96126038f4043c83abbec"
Jan 26 00:10:00 crc kubenswrapper[5124]: E0126 00:10:00.455641 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:00 crc kubenswrapper[5124]: E0126 00:10:00.556658 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:00 crc kubenswrapper[5124]: E0126 00:10:00.657166 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:00 crc kubenswrapper[5124]: E0126 00:10:00.757978 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:00 crc kubenswrapper[5124]: E0126 00:10:00.858808 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:00 crc kubenswrapper[5124]: E0126 00:10:00.959887 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:01 crc kubenswrapper[5124]: E0126 00:10:01.060411 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:01 crc kubenswrapper[5124]: E0126 00:10:01.161559 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:01 crc kubenswrapper[5124]: E0126 00:10:01.262642 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:01 crc kubenswrapper[5124]: E0126 00:10:01.363848 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:01 crc kubenswrapper[5124]: E0126 00:10:01.463988 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:01 crc kubenswrapper[5124]: E0126 00:10:01.565168 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:01 crc kubenswrapper[5124]: I0126 00:10:01.634686 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Jan 26 00:10:01 crc kubenswrapper[5124]: I0126 00:10:01.635566 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Jan 26 00:10:01 crc kubenswrapper[5124]: I0126 00:10:01.637758 5124 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="6215e20f15c7a51f410c9c54859dda249912a0f1e02d737e53f957cd8d73cd32" exitCode=255
Jan 26 00:10:01 crc kubenswrapper[5124]: I0126 00:10:01.637831 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"6215e20f15c7a51f410c9c54859dda249912a0f1e02d737e53f957cd8d73cd32"}
Jan 26 00:10:01 crc kubenswrapper[5124]: I0126 00:10:01.637889 5124 scope.go:117] "RemoveContainer" containerID="54fe5054d12e34672ab0b7958239e2c701d787ac0ae96126038f4043c83abbec"
Jan 26 00:10:01 crc kubenswrapper[5124]: I0126 00:10:01.638141 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 26 00:10:01 crc kubenswrapper[5124]: I0126 00:10:01.639267 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:10:01 crc kubenswrapper[5124]: I0126 00:10:01.639376 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:10:01 crc kubenswrapper[5124]: I0126 00:10:01.639461 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:10:01 crc kubenswrapper[5124]: E0126 00:10:01.640646 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 26 00:10:01 crc kubenswrapper[5124]: I0126 00:10:01.641065 5124 scope.go:117] "RemoveContainer" containerID="6215e20f15c7a51f410c9c54859dda249912a0f1e02d737e53f957cd8d73cd32"
Jan 26 00:10:01 crc kubenswrapper[5124]: E0126 00:10:01.641391 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 26 00:10:01 crc kubenswrapper[5124]: E0126 00:10:01.665973 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:01 crc kubenswrapper[5124]: E0126 00:10:01.723166 5124 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Jan 26 00:10:01 crc kubenswrapper[5124]: I0126 00:10:01.728391 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:10:01 crc kubenswrapper[5124]: I0126 00:10:01.728459 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:10:01 crc kubenswrapper[5124]: I0126 00:10:01.728478 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:10:01 crc kubenswrapper[5124]: I0126 00:10:01.728500 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:10:01 crc kubenswrapper[5124]: I0126 00:10:01.728517 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:01Z","lastTransitionTime":"2026-01-26T00:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:01 crc kubenswrapper[5124]: E0126 00:10:01.744265 5124 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"24413647-b67c-4e2e-bb9e-ac26cf92e744\\\",\\\"systemUUID\\\":\\\"c7fd9a8b-5491-44c4-bd96-9fa0fdb97ad8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:01 crc kubenswrapper[5124]: I0126 00:10:01.748731 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:01 crc kubenswrapper[5124]: I0126 00:10:01.748835 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:01 crc kubenswrapper[5124]: I0126 00:10:01.748868 5124 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:01 crc kubenswrapper[5124]: I0126 00:10:01.748912 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:01 crc kubenswrapper[5124]: I0126 00:10:01.748963 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:01Z","lastTransitionTime":"2026-01-26T00:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:01 crc kubenswrapper[5124]: E0126 00:10:01.764998 5124 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"24413647-b67c-4e2e-bb9e-ac26cf92e744\\\",\\\"systemUUID\\\":\\\"c7fd9a8b-5491-44c4-bd96-9fa0fdb97ad8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:01 crc kubenswrapper[5124]: I0126 00:10:01.769095 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:01 crc kubenswrapper[5124]: I0126 00:10:01.769178 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:01 crc kubenswrapper[5124]: I0126 00:10:01.769200 5124 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:01 crc kubenswrapper[5124]: I0126 00:10:01.769229 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:01 crc kubenswrapper[5124]: I0126 00:10:01.769252 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:01Z","lastTransitionTime":"2026-01-26T00:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:01 crc kubenswrapper[5124]: E0126 00:10:01.783524 5124 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"24413647-b67c-4e2e-bb9e-ac26cf92e744\\\",\\\"systemUUID\\\":\\\"c7fd9a8b-5491-44c4-bd96-9fa0fdb97ad8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:01 crc kubenswrapper[5124]: I0126 00:10:01.787767 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:01 crc kubenswrapper[5124]: I0126 00:10:01.787829 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:01 crc kubenswrapper[5124]: I0126 00:10:01.787845 5124 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:01 crc kubenswrapper[5124]: I0126 00:10:01.787866 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:01 crc kubenswrapper[5124]: I0126 00:10:01.787882 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:01Z","lastTransitionTime":"2026-01-26T00:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:01 crc kubenswrapper[5124]: E0126 00:10:01.803244 5124 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"24413647-b67c-4e2e-bb9e-ac26cf92e744\\\",\\\"systemUUID\\\":\\\"c7fd9a8b-5491-44c4-bd96-9fa0fdb97ad8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:01 crc kubenswrapper[5124]: E0126 00:10:01.803420 5124 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 26 00:10:01 crc kubenswrapper[5124]: E0126 00:10:01.803455 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:01 crc kubenswrapper[5124]: E0126 00:10:01.904058 5124 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:02 crc kubenswrapper[5124]: E0126 00:10:02.004268 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:02 crc kubenswrapper[5124]: E0126 00:10:02.104775 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:02 crc kubenswrapper[5124]: E0126 00:10:02.205882 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:02 crc kubenswrapper[5124]: E0126 00:10:02.306300 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:02 crc kubenswrapper[5124]: E0126 00:10:02.406570 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:02 crc kubenswrapper[5124]: E0126 00:10:02.425412 5124 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 00:10:02 crc kubenswrapper[5124]: E0126 00:10:02.507272 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:02 crc kubenswrapper[5124]: E0126 00:10:02.608097 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:02 crc kubenswrapper[5124]: I0126 00:10:02.643339 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 26 00:10:02 crc kubenswrapper[5124]: E0126 00:10:02.708235 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:02 crc kubenswrapper[5124]: E0126 00:10:02.809436 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:02 crc kubenswrapper[5124]: E0126 00:10:02.910063 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:03 crc kubenswrapper[5124]: E0126 00:10:03.010356 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:03 crc kubenswrapper[5124]: E0126 00:10:03.111404 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:03 crc kubenswrapper[5124]: E0126 00:10:03.212542 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:03 crc kubenswrapper[5124]: E0126 00:10:03.312923 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:03 crc kubenswrapper[5124]: E0126 00:10:03.413981 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:03 crc kubenswrapper[5124]: E0126 00:10:03.514310 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:03 crc kubenswrapper[5124]: E0126 00:10:03.615147 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:03 crc kubenswrapper[5124]: E0126 00:10:03.715309 5124 kubelet_node_status.go:515] "Error getting the current 
node from lister" err="node \"crc\" not found" Jan 26 00:10:03 crc kubenswrapper[5124]: E0126 00:10:03.816386 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:03 crc kubenswrapper[5124]: E0126 00:10:03.916903 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:04 crc kubenswrapper[5124]: E0126 00:10:04.017568 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:04 crc kubenswrapper[5124]: E0126 00:10:04.117776 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:04 crc kubenswrapper[5124]: E0126 00:10:04.217908 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:04 crc kubenswrapper[5124]: E0126 00:10:04.318359 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:04 crc kubenswrapper[5124]: E0126 00:10:04.418883 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:04 crc kubenswrapper[5124]: E0126 00:10:04.519159 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:04 crc kubenswrapper[5124]: E0126 00:10:04.619918 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:04 crc kubenswrapper[5124]: E0126 00:10:04.720285 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:04 crc kubenswrapper[5124]: E0126 00:10:04.821345 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:04 crc kubenswrapper[5124]: E0126 00:10:04.921696 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:05 crc kubenswrapper[5124]: E0126 00:10:05.022204 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:05 crc kubenswrapper[5124]: E0126 00:10:05.123320 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:05 crc kubenswrapper[5124]: E0126 00:10:05.224433 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:05 crc kubenswrapper[5124]: E0126 00:10:05.325357 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:05 crc kubenswrapper[5124]: E0126 00:10:05.425478 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:05 crc kubenswrapper[5124]: E0126 00:10:05.526198 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:05 crc kubenswrapper[5124]: E0126 00:10:05.627182 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:05 crc kubenswrapper[5124]: E0126 00:10:05.727776 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:05 crc kubenswrapper[5124]: E0126 00:10:05.827925 5124 kubelet_node_status.go:515] "Error 
getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:05 crc kubenswrapper[5124]: E0126 00:10:05.928859 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:06 crc kubenswrapper[5124]: E0126 00:10:06.029995 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:06 crc kubenswrapper[5124]: E0126 00:10:06.130872 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:06 crc kubenswrapper[5124]: E0126 00:10:06.231706 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:06 crc kubenswrapper[5124]: E0126 00:10:06.332816 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:06 crc kubenswrapper[5124]: I0126 00:10:06.419038 5124 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:10:06 crc kubenswrapper[5124]: I0126 00:10:06.419417 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:10:06 crc kubenswrapper[5124]: I0126 00:10:06.420746 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:06 crc kubenswrapper[5124]: I0126 00:10:06.420960 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:06 crc kubenswrapper[5124]: I0126 00:10:06.421107 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:06 crc kubenswrapper[5124]: E0126 00:10:06.422035 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:10:06 crc kubenswrapper[5124]: I0126 00:10:06.422559 5124 scope.go:117] "RemoveContainer" containerID="6215e20f15c7a51f410c9c54859dda249912a0f1e02d737e53f957cd8d73cd32" Jan 26 00:10:06 crc kubenswrapper[5124]: E0126 00:10:06.423056 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 26 00:10:06 crc kubenswrapper[5124]: E0126 00:10:06.433543 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:06 crc kubenswrapper[5124]: E0126 00:10:06.533954 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:06 crc kubenswrapper[5124]: E0126 00:10:06.634878 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:06 crc kubenswrapper[5124]: E0126 00:10:06.735868 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:06 crc kubenswrapper[5124]: E0126 00:10:06.836410 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:06 crc kubenswrapper[5124]: E0126 00:10:06.936935 
5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:07 crc kubenswrapper[5124]: E0126 00:10:07.037996 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:07 crc kubenswrapper[5124]: E0126 00:10:07.138173 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:07 crc kubenswrapper[5124]: E0126 00:10:07.238806 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:07 crc kubenswrapper[5124]: E0126 00:10:07.339514 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:07 crc kubenswrapper[5124]: I0126 00:10:07.340773 5124 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Jan 26 00:10:07 crc kubenswrapper[5124]: E0126 00:10:07.440004 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:07 crc kubenswrapper[5124]: E0126 00:10:07.540142 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:07 crc kubenswrapper[5124]: I0126 00:10:07.543536 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:10:07 crc kubenswrapper[5124]: I0126 00:10:07.544076 5124 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:10:07 crc kubenswrapper[5124]: I0126 00:10:07.545718 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:07 crc kubenswrapper[5124]: I0126 00:10:07.545948 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:07 crc kubenswrapper[5124]: I0126 00:10:07.546178 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:07 crc kubenswrapper[5124]: E0126 00:10:07.547577 5124 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:10:07 crc kubenswrapper[5124]: I0126 00:10:07.548173 5124 scope.go:117] "RemoveContainer" containerID="6215e20f15c7a51f410c9c54859dda249912a0f1e02d737e53f957cd8d73cd32" Jan 26 00:10:07 crc kubenswrapper[5124]: E0126 00:10:07.548724 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 26 00:10:07 crc kubenswrapper[5124]: E0126 00:10:07.640693 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:07 crc kubenswrapper[5124]: E0126 00:10:07.741714 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:07 crc kubenswrapper[5124]: E0126 00:10:07.842974 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 
00:10:07 crc kubenswrapper[5124]: E0126 00:10:07.943655 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:08 crc kubenswrapper[5124]: E0126 00:10:08.044527 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:08 crc kubenswrapper[5124]: E0126 00:10:08.144701 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:08 crc kubenswrapper[5124]: E0126 00:10:08.245536 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:08 crc kubenswrapper[5124]: E0126 00:10:08.346729 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:08 crc kubenswrapper[5124]: E0126 00:10:08.447682 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:08 crc kubenswrapper[5124]: E0126 00:10:08.548350 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:08 crc kubenswrapper[5124]: E0126 00:10:08.648855 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:08 crc kubenswrapper[5124]: E0126 00:10:08.749927 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:08 crc kubenswrapper[5124]: E0126 00:10:08.850362 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:08 crc kubenswrapper[5124]: E0126 00:10:08.951741 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:09 crc kubenswrapper[5124]: E0126 00:10:09.052889 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:09 crc kubenswrapper[5124]: E0126 00:10:09.154285 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:09 crc kubenswrapper[5124]: E0126 00:10:09.254762 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:09 crc kubenswrapper[5124]: E0126 00:10:09.355503 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:09 crc kubenswrapper[5124]: E0126 00:10:09.455886 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:09 crc kubenswrapper[5124]: E0126 00:10:09.556471 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:09 crc kubenswrapper[5124]: E0126 00:10:09.657393 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:09 crc kubenswrapper[5124]: E0126 00:10:09.758216 5124 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:09 crc kubenswrapper[5124]: I0126 00:10:09.761449 5124 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Jan 26 00:10:09 crc kubenswrapper[5124]: I0126 00:10:09.809016 5124 kubelet.go:3340] "Creating a mirror pod for static pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:10:09 crc kubenswrapper[5124]: I0126 00:10:09.821935 5124 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:10:09 crc kubenswrapper[5124]: I0126 00:10:09.859835 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:09 crc kubenswrapper[5124]: I0126 00:10:09.859904 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:09 crc kubenswrapper[5124]: I0126 00:10:09.859922 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:09 crc kubenswrapper[5124]: I0126 00:10:09.859946 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:09 crc kubenswrapper[5124]: I0126 00:10:09.859963 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:09Z","lastTransitionTime":"2026-01-26T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:09 crc kubenswrapper[5124]: I0126 00:10:09.920628 5124 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 00:10:09 crc kubenswrapper[5124]: I0126 00:10:09.962358 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:09 crc kubenswrapper[5124]: I0126 00:10:09.962420 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:09 crc kubenswrapper[5124]: I0126 00:10:09.962445 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:09 crc kubenswrapper[5124]: I0126 00:10:09.962476 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:09 crc kubenswrapper[5124]: I0126 00:10:09.962499 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:09Z","lastTransitionTime":"2026-01-26T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.021992 5124 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.064511 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.064550 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.064558 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.064571 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.064581 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:10Z","lastTransitionTime":"2026-01-26T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.125131 5124 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.166961 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.167004 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.167016 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.167030 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.167039 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:10Z","lastTransitionTime":"2026-01-26T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.269921 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.269972 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.269986 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.270003 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.270015 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:10Z","lastTransitionTime":"2026-01-26T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.356402 5124 apiserver.go:52] "Watching apiserver" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.364529 5124 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.365482 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/machine-config-daemon-kmxcn","openshift-multus/network-metrics-daemon-sctbw","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-dns/node-resolver-cwsts","openshift-network-node-identity/network-node-identity-dgvkt","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-mpdlk","openshift-kube-apiserver/kube-apiserver-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-multus/multus-additional-cni-plugins-87scd","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-image-registry/node-ca-6grfh","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-multus/multus-smnb7","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-network-diagnostics/network-check-target-fhkjl","openshift-network-operator/iptables-alerter-5jnd7","openshift-ovn-kubernetes/ovnkube-node-sdh5t","openshift-etcd/etcd-crc"] Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.367413 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.369714 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:10 crc kubenswrapper[5124]: E0126 00:10:10.369799 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.370191 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.370697 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.370954 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.371397 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.371434 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.371448 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.371467 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.371479 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:10Z","lastTransitionTime":"2026-01-26T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.372946 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:10 crc kubenswrapper[5124]: E0126 00:10:10.373069 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.373166 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.373799 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.375545 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.375498 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.375799 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.375941 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.376209 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.376407 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.384407 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.393894 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:10 crc kubenswrapper[5124]: E0126 00:10:10.394008 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.394507 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-6grfh" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.395179 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.400325 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.401362 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.401990 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.402680 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.412018 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-cwsts" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.412794 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-mpdlk" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.412862 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.413434 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.415921 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.416264 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.416819 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.417095 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Jan 26 00:10:10 crc 
kubenswrapper[5124]: I0126 00:10:10.417320 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.417478 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.417500 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.418158 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.421011 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-87scd" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.421281 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.422739 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.423089 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.423445 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.423646 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.423854 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.423875 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.424075 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.424103 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.424888 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.424996 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sctbw" Jan 26 00:10:10 crc kubenswrapper[5124]: E0126 00:10:10.425068 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sctbw" podUID="08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.427709 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.431301 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.434778 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.435033 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.435272 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.435305 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.435441 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.437778 5124 scope.go:117] "RemoveContainer" containerID="6215e20f15c7a51f410c9c54859dda249912a0f1e02d737e53f957cd8d73cd32" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.438195 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: E0126 00:10:10.438193 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.439685 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.440112 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.440002 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.450377 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.464041 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sctbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sctbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.473561 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.473625 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.473638 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.473657 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.473671 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:10Z","lastTransitionTime":"2026-01-26T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.481659 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.481712 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9e5684ab-0b94-4eef-af30-0c6c4ab528af-hosts-file\") pod \"node-resolver-cwsts\" (UID: \"9e5684ab-0b94-4eef-af30-0c6c4ab528af\") " pod="openshift-dns/node-resolver-cwsts" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.481741 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.481826 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd8d8\" (UniqueName: \"kubernetes.io/projected/9e5684ab-0b94-4eef-af30-0c6c4ab528af-kube-api-access-dd8d8\") pod \"node-resolver-cwsts\" (UID: \"9e5684ab-0b94-4eef-af30-0c6c4ab528af\") " pod="openshift-dns/node-resolver-cwsts" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.481951 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.482012 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6jf9\" (UniqueName: \"kubernetes.io/projected/45a1a609-6066-42a0-a450-b0e70365aa9b-kube-api-access-j6jf9\") pod \"node-ca-6grfh\" (UID: \"45a1a609-6066-42a0-a450-b0e70365aa9b\") " pod="openshift-image-registry/node-ca-6grfh" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.482040 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8660dad9-43c8-4c00-872a-e00a6baab0f7-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-mpdlk\" (UID: \"8660dad9-43c8-4c00-872a-e00a6baab0f7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-mpdlk" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.482080 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.482102 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.482158 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9e5684ab-0b94-4eef-af30-0c6c4ab528af-tmp-dir\") pod \"node-resolver-cwsts\" (UID: \"9e5684ab-0b94-4eef-af30-0c6c4ab528af\") " pod="openshift-dns/node-resolver-cwsts" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.482201 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:10 crc kubenswrapper[5124]: E0126 00:10:10.482295 5124 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.482425 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.482476 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/45a1a609-6066-42a0-a450-b0e70365aa9b-serviceca\") pod \"node-ca-6grfh\" (UID: \"45a1a609-6066-42a0-a450-b0e70365aa9b\") " pod="openshift-image-registry/node-ca-6grfh" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.482630 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.482666 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/45a1a609-6066-42a0-a450-b0e70365aa9b-host\") pod \"node-ca-6grfh\" (UID: \"45a1a609-6066-42a0-a450-b0e70365aa9b\") " pod="openshift-image-registry/node-ca-6grfh" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.482874 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8660dad9-43c8-4c00-872a-e00a6baab0f7-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-mpdlk\" (UID: \"8660dad9-43c8-4c00-872a-e00a6baab0f7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-mpdlk" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.482989 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: 
\"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.483030 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx9l8\" (UniqueName: \"kubernetes.io/projected/8660dad9-43c8-4c00-872a-e00a6baab0f7-kube-api-access-lx9l8\") pod \"ovnkube-control-plane-57b78d8988-mpdlk\" (UID: \"8660dad9-43c8-4c00-872a-e00a6baab0f7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-mpdlk" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.483296 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.483327 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.483371 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.483394 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8660dad9-43c8-4c00-872a-e00a6baab0f7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-mpdlk\" (UID: \"8660dad9-43c8-4c00-872a-e00a6baab0f7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-mpdlk" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.483420 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.483468 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:10 crc kubenswrapper[5124]: E0126 00:10:10.483512 5124 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:10:10 crc 
kubenswrapper[5124]: E0126 00:10:10.483617 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:10.983554882 +0000 UTC m=+88.892474241 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:10:10 crc kubenswrapper[5124]: E0126 00:10:10.485056 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:10.98502149 +0000 UTC m=+88.893940849 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.485394 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.485659 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.485761 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.487996 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df9bb628-c0ff-4254-8f43-66c1d289b343\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://94702470d0dd24faac34520e06613c5897b79dde56d2897fabe3a52050980120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4647208e6c84a5a6977c9b5f4a59a5a2ec2b2957cb47ea0707851ab13bef96ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b64f819f442260b8aaac091fe6a09b99175d27d2ec944332d5977a5ca5af58f0\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2d6d8389d6d15bd747b8ef74dc30f010429f962e34fe75b84935720929eab5ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://cd70d62ee532dd5a0aa8e04beb99f336153670709121aa892e5fa90aca675a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2d8ce299ce0a170138601002ffd93680b9c5360
205e0cae4cfe90de54c659ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d8ce299ce0a170138601002ffd93680b9c5360205e0cae4cfe90de54c659ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6205e86cd3c1859b05bf772087c7bf0fc9286354ae84a1027fbf60ebfbd62df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6205e86cd3c1859b05bf772087c7bf0fc9286354ae84a1027fbf60ebfbd62df5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e74e60b8dabfb2b1fb5d7448547929a39ed771ac32c9c8ac05eda98c02da7625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e74e60b8dabfb2b1fb5d7448547929a39ed771ac32c9c8ac05eda98c02da7625\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.497652 5124 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 26 00:10:10 crc kubenswrapper[5124]: E0126 00:10:10.502184 5124 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:10:10 crc kubenswrapper[5124]: E0126 00:10:10.502256 5124 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:10:10 crc kubenswrapper[5124]: E0126 00:10:10.502278 5124 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:10 crc kubenswrapper[5124]: E0126 00:10:10.502684 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:11.002387791 +0000 UTC m=+88.911307150 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.504723 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.505093 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 26 00:10:10 crc kubenswrapper[5124]: E0126 00:10:10.508121 5124 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:10:10 crc kubenswrapper[5124]: E0126 00:10:10.508157 5124 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:10:10 crc kubenswrapper[5124]: E0126 00:10:10.508173 5124 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:10 crc kubenswrapper[5124]: E0126 00:10:10.508261 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:11.008236926 +0000 UTC m=+88.917156345 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.508276 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.509407 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.510512 5124 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.514299 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.518211 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.522781 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.532811 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-mpdlk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8660dad9-43c8-4c00-872a-e00a6baab0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx9l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx9l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-mpdlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.545662 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-smnb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f826f136-a910-4120-aa62-a08e427590c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbqfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-smnb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.556948 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09e553c5-fff7-48ff-8b44-c86ab881b7bc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b8da7cf7985b3076f734741cd805f8a4f273d7620fc89a9f9d02fa906489960c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://56cb10ea63f74e8cb16b42dc94949b4ddf748e8fdf73c942fb868db9001364e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://27a7b88896a26f50315b57e5bff7d5ec0511f09f0acb636c09e3c76caf1c686b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bbac19f70c66272a40bc7fe06106f95c04b995c67c127135d678b0ba9a78b1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbac19f70c66272a40bc7fe06106f95c04b995c67c127135d678b0ba9a78b1e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.566303 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.572878 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6grfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a1a609-6066-42a0-a450-b0e70365aa9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6jf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6grfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.577236 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.577270 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.577281 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.577295 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.577306 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:10Z","lastTransitionTime":"2026-01-26T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.578787 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-cwsts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e5684ab-0b94-4eef-af30-0c6c4ab528af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd8d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cwsts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.584162 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.584326 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.584373 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: 
\"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.584557 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.584626 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.584657 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.584716 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.584748 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.584777 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.584768 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.584806 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 26 00:10:10 crc kubenswrapper[5124]: E0126 00:10:10.584847 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:11.084817316 +0000 UTC m=+88.993736685 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.584913 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.585003 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.585031 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.585069 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.585097 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.585125 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.585151 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.585155 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.585174 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.585202 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.585230 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.585246 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.585257 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.585269 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.585651 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.585742 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.586290 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.586338 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.586371 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.586405 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.586436 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.586461 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.586494 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.586523 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.586550 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.586558 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.586577 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.586672 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.586704 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.586730 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.586759 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.586793 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.586812 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.586824 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.586889 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.586939 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.587012 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.587036 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.587074 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.587104 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.587128 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.587152 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.587175 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.587207 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.587236 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.587268 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.587300 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.587329 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.587359 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.587387 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.587417 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.587118 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.588404 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.588465 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.588487 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.587180 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.587198 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.587218 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.587406 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.587599 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.587692 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.587809 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.587906 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.588057 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.588189 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.587968 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.588626 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.588987 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.589122 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.589175 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.589197 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.589216 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.589249 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.589278 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.589310 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.589343 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.589372 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.589396 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.589423 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.589452 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.589470 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.589495 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.589529 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.589557 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.589566 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.589601 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.589617 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.589767 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.589782 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.590061 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.590116 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.590177 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.590329 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.590339 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). 
InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.590369 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.590449 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.590731 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.590775 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.590779 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.590797 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.590824 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.590844 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.590864 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.590886 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.590910 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.590935 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.590954 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.590977 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.591001 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod 
\"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.590997 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.591025 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.591046 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.591068 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.591088 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.591107 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.591125 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.591152 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.591174 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.591192 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.591211 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.591230 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.591248 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.591274 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.591283 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.591297 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.591390 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.591395 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.591422 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.591518 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.591531 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.591565 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.591578 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.591610 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.591611 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.591629 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.591701 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.591732 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.591793 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.591800 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.591828 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.591865 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.591898 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.591925 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.591953 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.591981 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.592005 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.592169 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.592355 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.592176 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.592116 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.592871 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.593008 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.593080 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.593117 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.593162 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.593198 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.593228 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.593258 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 26 00:10:10 crc 
kubenswrapper[5124]: I0126 00:10:10.593324 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.593323 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.593449 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.593812 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.593856 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.593894 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.593933 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.593476 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.594157 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). 
InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.595101 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.595236 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.595237 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.595411 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.595569 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.595631 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.595654 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.595679 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.595703 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.595726 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.595749 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.595752 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.595774 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.595798 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.595823 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.595858 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.595885 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.595910 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.595932 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.595973 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.596000 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.596027 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod 
\"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.596050 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.596072 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.596084 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.596094 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.596150 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.596155 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.596159 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.596186 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.596201 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.596215 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.596239 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.596247 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.596274 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.596298 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.596306 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.596308 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.596323 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.596370 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.596413 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.596657 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.596673 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.596678 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.596778 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.596807 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.596827 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.596841 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.596847 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.596923 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.596982 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.597040 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.597065 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.597090 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.597109 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.597130 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.597153 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.597169 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.597537 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.597560 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 26 
00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.597577 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.597959 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.597987 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598004 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598024 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598040 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598058 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598078 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598098 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598122 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: 
\"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598140 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598158 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598178 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598197 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598219 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598248 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598274 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598299 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598324 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598353 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod 
\"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598376 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598398 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598418 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598441 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598468 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598489 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598513 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598531 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598549 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598571 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598607 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598627 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598647 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598669 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598690 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598708 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598725 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598744 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.599168 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.599213 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: 
\"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.599296 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.599334 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.599361 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.599391 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.599419 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.599451 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.599491 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.599521 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.599556 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.599605 
5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.599638 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.599671 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.599702 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.599731 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.599761 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.599792 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.599820 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.599847 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.599874 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 
00:10:10.599902 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.599929 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.599958 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.600055 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lx9l8\" (UniqueName: \"kubernetes.io/projected/8660dad9-43c8-4c00-872a-e00a6baab0f7-kube-api-access-lx9l8\") pod \"ovnkube-control-plane-57b78d8988-mpdlk\" (UID: \"8660dad9-43c8-4c00-872a-e00a6baab0f7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-mpdlk" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.600091 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f826f136-a910-4120-aa62-a08e427590c0-multus-cni-dir\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.600118 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f826f136-a910-4120-aa62-a08e427590c0-host-var-lib-cni-multus\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.600145 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbqfv\" (UniqueName: \"kubernetes.io/projected/f826f136-a910-4120-aa62-a08e427590c0-kube-api-access-gbqfv\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.597010 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.600179 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8660dad9-43c8-4c00-872a-e00a6baab0f7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-mpdlk\" (UID: \"8660dad9-43c8-4c00-872a-e00a6baab0f7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-mpdlk" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.600205 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-log-socket\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.597029 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.600233 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-host-run-ovn-kubernetes\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.600264 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfkp9\" (UniqueName: \"kubernetes.io/projected/08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b-kube-api-access-pfkp9\") pod \"network-metrics-daemon-sctbw\" (UID: \"08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b\") " pod="openshift-multus/network-metrics-daemon-sctbw" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.601109 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f826f136-a910-4120-aa62-a08e427590c0-cnibin\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.604208 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f826f136-a910-4120-aa62-a08e427590c0-host-run-k8s-cni-cncf-io\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.604288 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb6p6\" (UniqueName: \"kubernetes.io/projected/5c96023c-09ac-49d0-b8bd-09f46f6d9655-kube-api-access-nb6p6\") pod \"multus-additional-cni-plugins-87scd\" (UID: \"5c96023c-09ac-49d0-b8bd-09f46f6d9655\") " pod="openshift-multus/multus-additional-cni-plugins-87scd" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.604323 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-systemd-units\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.604350 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-run-openvswitch\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.604384 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-node-log\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.604412 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-ovnkube-config\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.604438 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f826f136-a910-4120-aa62-a08e427590c0-host-var-lib-kubelet\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.604465 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f826f136-a910-4120-aa62-a08e427590c0-etc-kubernetes\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.604534 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dd8d8\" (UniqueName: \"kubernetes.io/projected/9e5684ab-0b94-4eef-af30-0c6c4ab528af-kube-api-access-dd8d8\") pod \"node-resolver-cwsts\" (UID: \"9e5684ab-0b94-4eef-af30-0c6c4ab528af\") " pod="openshift-dns/node-resolver-cwsts" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.604564 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.604612 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5c96023c-09ac-49d0-b8bd-09f46f6d9655-os-release\") pod \"multus-additional-cni-plugins-87scd\" (UID: \"5c96023c-09ac-49d0-b8bd-09f46f6d9655\") " pod="openshift-multus/multus-additional-cni-plugins-87scd" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.604641 5124 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-var-lib-openvswitch\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.604668 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f826f136-a910-4120-aa62-a08e427590c0-multus-daemon-config\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.604695 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-host-kubelet\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.604723 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-host-slash\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.604753 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f826f136-a910-4120-aa62-a08e427590c0-system-cni-dir\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.604789 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/95fa0656-150a-4d93-a324-77a1306d91f7-rootfs\") pod \"machine-config-daemon-kmxcn\" (UID: \"95fa0656-150a-4d93-a324-77a1306d91f7\") " pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.604814 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-run-ovn\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.604838 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-env-overrides\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.604874 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/45a1a609-6066-42a0-a450-b0e70365aa9b-host\") pod \"node-ca-6grfh\" (UID: \"45a1a609-6066-42a0-a450-b0e70365aa9b\") " pod="openshift-image-registry/node-ca-6grfh" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.604905 5124 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5c96023c-09ac-49d0-b8bd-09f46f6d9655-cnibin\") pod \"multus-additional-cni-plugins-87scd\" (UID: \"5c96023c-09ac-49d0-b8bd-09f46f6d9655\") " pod="openshift-multus/multus-additional-cni-plugins-87scd" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.604935 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/5c96023c-09ac-49d0-b8bd-09f46f6d9655-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-87scd\" (UID: \"5c96023c-09ac-49d0-b8bd-09f46f6d9655\") " pod="openshift-multus/multus-additional-cni-plugins-87scd" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.604966 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f826f136-a910-4120-aa62-a08e427590c0-os-release\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.604994 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f826f136-a910-4120-aa62-a08e427590c0-host-run-multus-certs\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.605031 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-etc-openvswitch\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.605058 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b-metrics-certs\") pod \"network-metrics-daemon-sctbw\" (UID: \"08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b\") " pod="openshift-multus/network-metrics-daemon-sctbw" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.605105 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.605134 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-host-cni-bin\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.605159 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-host-cni-netd\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.605214 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/95fa0656-150a-4d93-a324-77a1306d91f7-proxy-tls\") pod \"machine-config-daemon-kmxcn\" (UID: \"95fa0656-150a-4d93-a324-77a1306d91f7\") " pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.605242 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/95fa0656-150a-4d93-a324-77a1306d91f7-mcd-auth-proxy-config\") pod \"machine-config-daemon-kmxcn\" (UID: \"95fa0656-150a-4d93-a324-77a1306d91f7\") " pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.605928 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5c96023c-09ac-49d0-b8bd-09f46f6d9655-cni-binary-copy\") pod \"multus-additional-cni-plugins-87scd\" (UID: \"5c96023c-09ac-49d0-b8bd-09f46f6d9655\") " pod="openshift-multus/multus-additional-cni-plugins-87scd" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.606081 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.606276 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.606398 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/45a1a609-6066-42a0-a450-b0e70365aa9b-host\") pod \"node-ca-6grfh\" (UID: \"45a1a609-6066-42a0-a450-b0e70365aa9b\") " pod="openshift-image-registry/node-ca-6grfh" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.606743 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f826f136-a910-4120-aa62-a08e427590c0-host-var-lib-cni-bin\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.606785 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f826f136-a910-4120-aa62-a08e427590c0-hostroot\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.606813 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f826f136-a910-4120-aa62-a08e427590c0-multus-conf-dir\") pod \"multus-smnb7\" (UID: 
\"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.607248 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9e5684ab-0b94-4eef-af30-0c6c4ab528af-hosts-file\") pod \"node-resolver-cwsts\" (UID: \"9e5684ab-0b94-4eef-af30-0c6c4ab528af\") " pod="openshift-dns/node-resolver-cwsts" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.607346 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9e5684ab-0b94-4eef-af30-0c6c4ab528af-hosts-file\") pod \"node-resolver-cwsts\" (UID: \"9e5684ab-0b94-4eef-af30-0c6c4ab528af\") " pod="openshift-dns/node-resolver-cwsts" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.607383 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5c96023c-09ac-49d0-b8bd-09f46f6d9655-system-cni-dir\") pod \"multus-additional-cni-plugins-87scd\" (UID: \"5c96023c-09ac-49d0-b8bd-09f46f6d9655\") " pod="openshift-multus/multus-additional-cni-plugins-87scd" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.607419 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f826f136-a910-4120-aa62-a08e427590c0-multus-socket-dir-parent\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.607442 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f826f136-a910-4120-aa62-a08e427590c0-host-run-netns\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.607485 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j6jf9\" (UniqueName: \"kubernetes.io/projected/45a1a609-6066-42a0-a450-b0e70365aa9b-kube-api-access-j6jf9\") pod \"node-ca-6grfh\" (UID: \"45a1a609-6066-42a0-a450-b0e70365aa9b\") " pod="openshift-image-registry/node-ca-6grfh" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.607513 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8660dad9-43c8-4c00-872a-e00a6baab0f7-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-mpdlk\" (UID: \"8660dad9-43c8-4c00-872a-e00a6baab0f7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-mpdlk" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.607538 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5c96023c-09ac-49d0-b8bd-09f46f6d9655-tuning-conf-dir\") pod \"multus-additional-cni-plugins-87scd\" (UID: \"5c96023c-09ac-49d0-b8bd-09f46f6d9655\") " pod="openshift-multus/multus-additional-cni-plugins-87scd" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.607652 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9e5684ab-0b94-4eef-af30-0c6c4ab528af-tmp-dir\") pod \"node-resolver-cwsts\" (UID: 
\"9e5684ab-0b94-4eef-af30-0c6c4ab528af\") " pod="openshift-dns/node-resolver-cwsts" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.607712 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xt6k\" (UniqueName: \"kubernetes.io/projected/95fa0656-150a-4d93-a324-77a1306d91f7-kube-api-access-4xt6k\") pod \"machine-config-daemon-kmxcn\" (UID: \"95fa0656-150a-4d93-a324-77a1306d91f7\") " pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.607747 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sphjf\" (UniqueName: \"kubernetes.io/projected/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-kube-api-access-sphjf\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.607781 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f826f136-a910-4120-aa62-a08e427590c0-cni-binary-copy\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.608029 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/45a1a609-6066-42a0-a450-b0e70365aa9b-serviceca\") pod \"node-ca-6grfh\" (UID: \"45a1a609-6066-42a0-a450-b0e70365aa9b\") " pod="openshift-image-registry/node-ca-6grfh" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.608260 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5c96023c-09ac-49d0-b8bd-09f46f6d9655-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-87scd\" (UID: \"5c96023c-09ac-49d0-b8bd-09f46f6d9655\") " pod="openshift-multus/multus-additional-cni-plugins-87scd" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.608356 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-host-run-netns\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.608436 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.608550 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-ovn-node-metrics-cert\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.597108 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.608786 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.597175 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.597359 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.597538 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.597677 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.597701 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.597785 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598000 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598226 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598172 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598446 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598745 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598790 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598801 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598908 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598912 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.598945 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.599044 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.599114 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.599210 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.599761 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.599776 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.599786 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.599959 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.600082 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.600111 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.600129 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.600320 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.600954 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.601007 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.601013 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.601027 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.601159 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.601516 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.601582 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.601892 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.602032 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.602027 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.602053 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.602099 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.602285 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.602359 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.602380 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.602528 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.602543 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.602499 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.609145 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8660dad9-43c8-4c00-872a-e00a6baab0f7-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-mpdlk\" (UID: \"8660dad9-43c8-4c00-872a-e00a6baab0f7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-mpdlk" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.609146 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.602648 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.602836 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.602944 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.602900 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.603145 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.603200 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.603386 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.603820 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.603992 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.604025 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.603896 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.604037 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.604136 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.604252 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.604281 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.604386 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.604547 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.604651 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.604664 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.604735 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.604917 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). 
InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.604944 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.604941 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.605001 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.605065 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.605206 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.605226 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.605433 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.605685 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.605697 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.605680 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.605831 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.605835 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.605912 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.605990 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.605998 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.606013 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.606197 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.606261 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.606273 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.606376 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.607054 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.608319 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.609454 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9e5684ab-0b94-4eef-af30-0c6c4ab528af-tmp-dir\") pod \"node-resolver-cwsts\" (UID: \"9e5684ab-0b94-4eef-af30-0c6c4ab528af\") " pod="openshift-dns/node-resolver-cwsts" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.609214 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-run-systemd\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.609512 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-ovnkube-script-lib\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.609508 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.608325 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.608397 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.608466 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.608845 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.609315 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b
122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\
":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sdh5t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.609854 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8660dad9-43c8-4c00-872a-e00a6baab0f7-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-mpdlk\" (UID: \"8660dad9-43c8-4c00-872a-e00a6baab0f7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-mpdlk" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.610082 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8660dad9-43c8-4c00-872a-e00a6baab0f7-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-mpdlk\" (UID: \"8660dad9-43c8-4c00-872a-e00a6baab0f7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-mpdlk" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.610432 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.610448 5124 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.610460 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.610471 5124 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.610483 5124 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.610497 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: 
\"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.610508 5124 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.610520 5124 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.610532 5124 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.610541 5124 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.610551 5124 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.610562 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.610573 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.610598 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.610609 5124 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.610619 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.611015 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.611028 5124 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.611038 5124 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.611049 5124 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.611059 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.610714 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.611087 5124 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.611140 5124 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.611186 5124 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.611129 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.611211 5124 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.611230 5124 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.611227 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.611246 5124 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.611283 5124 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.611319 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.611490 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.611535 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.611581 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.611818 5124 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.611836 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.611846 5124 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.611854 5124 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.611863 5124 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.611873 5124 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.611887 5124 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.611896 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.611905 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.611913 5124 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.611922 5124 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.611930 5124 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.611938 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.611949 5124 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.611958 5124 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.611967 5124 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.611975 5124 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.611984 5124 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.611993 5124 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node 
\"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.612013 5124 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.612021 5124 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.612030 5124 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.612038 5124 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.612047 5124 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.612055 5124 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.612066 5124 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.612074 5124 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.612082 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.612092 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.612100 5124 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.612108 5124 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.612117 5124 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.612125 5124 reconciler_common.go:299] "Volume 
detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.612170 5124 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.612182 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.612192 5124 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.612201 5124 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.612210 5124 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.612218 5124 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.612228 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.612236 5124 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.612244 5124 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.612253 5124 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.612262 5124 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.612271 5124 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.612279 5124 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.612290 5124 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.612298 5124 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.612307 5124 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.612316 5124 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.612079 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/45a1a609-6066-42a0-a450-b0e70365aa9b-serviceca\") pod \"node-ca-6grfh\" (UID: \"45a1a609-6066-42a0-a450-b0e70365aa9b\") " pod="openshift-image-registry/node-ca-6grfh" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.612987 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.617551 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8660dad9-43c8-4c00-872a-e00a6baab0f7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-mpdlk\" (UID: \"8660dad9-43c8-4c00-872a-e00a6baab0f7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-mpdlk" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.618326 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.618345 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.618374 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.618575 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.618641 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.618725 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.619149 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.619289 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.619413 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.619689 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.619942 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.619878 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.620008 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.620262 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.620395 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c99ae5-3448-4d7b-9141-781a3683de72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a04fa4d6993fe4e83a7bd2d552bb16d9dc8e33e89a789170b8fec180c65b793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a4d65f95ca5f832e6ac85de46fd3d474221c3263ab1c2eba3123e4742fc5287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{
\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7da17ce8ac77c94210b966d6bc7b376e82189a903321c9800662d2c12abf965d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://effeb6003c974dc677094f47337b7bf2ba1dad9209e7f72af53b5ac7d069f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.620872 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" 
(OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.621610 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.624936 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.625067 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.626275 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6jf9\" (UniqueName: \"kubernetes.io/projected/45a1a609-6066-42a0-a450-b0e70365aa9b-kube-api-access-j6jf9\") pod \"node-ca-6grfh\" (UID: \"45a1a609-6066-42a0-a450-b0e70365aa9b\") " pod="openshift-image-registry/node-ca-6grfh" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.626353 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.627826 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd8d8\" (UniqueName: \"kubernetes.io/projected/9e5684ab-0b94-4eef-af30-0c6c4ab528af-kube-api-access-dd8d8\") pod \"node-resolver-cwsts\" (UID: \"9e5684ab-0b94-4eef-af30-0c6c4ab528af\") " pod="openshift-dns/node-resolver-cwsts" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.628472 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.628858 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). 
InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.629162 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.633995 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx9l8\" (UniqueName: \"kubernetes.io/projected/8660dad9-43c8-4c00-872a-e00a6baab0f7-kube-api-access-lx9l8\") pod \"ovnkube-control-plane-57b78d8988-mpdlk\" (UID: \"8660dad9-43c8-4c00-872a-e00a6baab0f7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-mpdlk" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.634224 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.634571 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.634916 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.634925 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.635342 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.635699 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.636146 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.636180 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.636217 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.636250 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.636354 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.636351 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.636670 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.636666 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.636697 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.637279 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.637452 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). 
InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.637632 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.647867 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.649346 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-87scd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c96023c-09ac-49d0-b8bd-09f46f6d9655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nb6p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nb6p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":fals
e,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nb6p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nb6p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nb6p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nb6p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":
\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nb6p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-87scd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.651457 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.657717 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95fa0656-150a-4d93-a324-77a1306d91f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xt6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xt6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kmxcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.664270 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99e4f768-137c-4c5c-878d-3852f54a6df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f4382e3a3d54a3ceaf116dd5c6f7f458833943f7e948dc335bc038b3267463d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0733ced83f8a595542a3a5e1b2358bdd6e9c9867d4d31b83aba01450710a1393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0733ced83f8a595542a3a5e1b2358bdd6e9c9867d4d31b83aba01450710a1393\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:42Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.667735 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.675168 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fa44516-2654-456d-893a-96341101557c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://2f8ecaa38888bb973b4fd3205014aa0edb7c85e52834f767b37256195a18e041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d8b9a76e6a593a00eb07a766e1124a3590c5c94c41c5
54bebb577109de5a4ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://37fbcde7240eabecd5368c44cfa4027f8d40c4f52393eb773692e55130233c75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6215e20f15c7a51f410c9c54859dda249912a0f1e02d737e53f957cd8d73cd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6215e20f15c7a51f410c9c54859dda249912a0f1e02d737e53f957cd8d73cd32\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:10:01Z\\\",\\\"message\\\":\\\"o:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0126 00:10:01.118231 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 00:10:01.118416 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0126 00:10:01.121827 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107044536/tls.crt::/tmp/serving-cert-107044536/tls.key\\\\\\\"\\\\nI0126 00:10:01.529054 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:10:01.532621 
1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:10:01.532658 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:10:01.532703 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:10:01.532730 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:10:01.539927 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 00:10:01.539960 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:10:01.539981 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:01.539994 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:01.540005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:10:01.540013 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:10:01.540020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:10:01.540025 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:10:01.543048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:10:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8e80c933db284b36b8dafc7bc44abe6be54c57c85857f99b2194d01cced7327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66f10dcb1c1f631da6488a7b4271bb9abc58d887ad17e7515550b916cf9a60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f10dcb1c1f631da6488a7b4271bb9abc58d887ad17e7515550b916cf9a60cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.678883 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.678920 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:10 crc 
kubenswrapper[5124]: I0126 00:10:10.678931 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.678945 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.678956 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:10Z","lastTransitionTime":"2026-01-26T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.684379 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.691535 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.708665 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 26 00:10:10 crc kubenswrapper[5124]: E0126 00:10:10.709121 5124 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:10 crc kubenswrapper[5124]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Jan 26 00:10:10 crc kubenswrapper[5124]: set -o allexport Jan 26 00:10:10 crc kubenswrapper[5124]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Jan 26 00:10:10 crc kubenswrapper[5124]: source /etc/kubernetes/apiserver-url.env Jan 26 00:10:10 crc kubenswrapper[5124]: else Jan 26 00:10:10 crc kubenswrapper[5124]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Jan 26 00:10:10 crc kubenswrapper[5124]: exit 1 Jan 26 00:10:10 crc kubenswrapper[5124]: fi Jan 26 00:10:10 crc kubenswrapper[5124]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Jan 26 00:10:10 crc kubenswrapper[5124]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_I
MAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:10 crc kubenswrapper[5124]: > logger="UnhandledError" Jan 26 00:10:10 crc kubenswrapper[5124]: E0126 00:10:10.710813 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.712907 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/95fa0656-150a-4d93-a324-77a1306d91f7-rootfs\") pod 
\"machine-config-daemon-kmxcn\" (UID: \"95fa0656-150a-4d93-a324-77a1306d91f7\") " pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.713069 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-run-ovn\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.713193 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-env-overrides\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.713299 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5c96023c-09ac-49d0-b8bd-09f46f6d9655-cnibin\") pod \"multus-additional-cni-plugins-87scd\" (UID: \"5c96023c-09ac-49d0-b8bd-09f46f6d9655\") " pod="openshift-multus/multus-additional-cni-plugins-87scd" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.713441 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/5c96023c-09ac-49d0-b8bd-09f46f6d9655-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-87scd\" (UID: \"5c96023c-09ac-49d0-b8bd-09f46f6d9655\") " pod="openshift-multus/multus-additional-cni-plugins-87scd" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.713566 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f826f136-a910-4120-aa62-a08e427590c0-os-release\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.713741 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5c96023c-09ac-49d0-b8bd-09f46f6d9655-cnibin\") pod \"multus-additional-cni-plugins-87scd\" (UID: \"5c96023c-09ac-49d0-b8bd-09f46f6d9655\") " pod="openshift-multus/multus-additional-cni-plugins-87scd" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.713748 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f826f136-a910-4120-aa62-a08e427590c0-host-run-multus-certs\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.713827 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-etc-openvswitch\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.713847 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b-metrics-certs\") pod \"network-metrics-daemon-sctbw\" (UID: 
\"08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b\") " pod="openshift-multus/network-metrics-daemon-sctbw" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.713871 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-host-cni-bin\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.713888 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-host-cni-netd\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.713905 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/95fa0656-150a-4d93-a324-77a1306d91f7-proxy-tls\") pod \"machine-config-daemon-kmxcn\" (UID: \"95fa0656-150a-4d93-a324-77a1306d91f7\") " pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.713913 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-env-overrides\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.713923 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/95fa0656-150a-4d93-a324-77a1306d91f7-mcd-auth-proxy-config\") pod \"machine-config-daemon-kmxcn\" (UID: \"95fa0656-150a-4d93-a324-77a1306d91f7\") " pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.713995 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5c96023c-09ac-49d0-b8bd-09f46f6d9655-cni-binary-copy\") pod \"multus-additional-cni-plugins-87scd\" (UID: \"5c96023c-09ac-49d0-b8bd-09f46f6d9655\") " pod="openshift-multus/multus-additional-cni-plugins-87scd" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.714024 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f826f136-a910-4120-aa62-a08e427590c0-host-var-lib-cni-bin\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.714051 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f826f136-a910-4120-aa62-a08e427590c0-hostroot\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.714299 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f826f136-a910-4120-aa62-a08e427590c0-multus-conf-dir\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " 
pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.714374 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5c96023c-09ac-49d0-b8bd-09f46f6d9655-system-cni-dir\") pod \"multus-additional-cni-plugins-87scd\" (UID: \"5c96023c-09ac-49d0-b8bd-09f46f6d9655\") " pod="openshift-multus/multus-additional-cni-plugins-87scd" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.714419 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f826f136-a910-4120-aa62-a08e427590c0-multus-socket-dir-parent\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.714445 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/95fa0656-150a-4d93-a324-77a1306d91f7-mcd-auth-proxy-config\") pod \"machine-config-daemon-kmxcn\" (UID: \"95fa0656-150a-4d93-a324-77a1306d91f7\") " pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.713133 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-run-ovn\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.714495 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-etc-openvswitch\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.714547 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f826f136-a910-4120-aa62-a08e427590c0-host-run-netns\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: E0126 00:10:10.714561 5124 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.714574 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5c96023c-09ac-49d0-b8bd-09f46f6d9655-tuning-conf-dir\") pod \"multus-additional-cni-plugins-87scd\" (UID: \"5c96023c-09ac-49d0-b8bd-09f46f6d9655\") " pod="openshift-multus/multus-additional-cni-plugins-87scd" Jan 26 00:10:10 crc kubenswrapper[5124]: E0126 00:10:10.714614 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b-metrics-certs podName:08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b nodeName:}" failed. No retries permitted until 2026-01-26 00:10:11.214600247 +0000 UTC m=+89.123519596 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b-metrics-certs") pod "network-metrics-daemon-sctbw" (UID: "08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.714651 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4xt6k\" (UniqueName: \"kubernetes.io/projected/95fa0656-150a-4d93-a324-77a1306d91f7-kube-api-access-4xt6k\") pod \"machine-config-daemon-kmxcn\" (UID: \"95fa0656-150a-4d93-a324-77a1306d91f7\") " pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.714670 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sphjf\" (UniqueName: \"kubernetes.io/projected/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-kube-api-access-sphjf\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.714688 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f826f136-a910-4120-aa62-a08e427590c0-cni-binary-copy\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.714707 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5c96023c-09ac-49d0-b8bd-09f46f6d9655-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-87scd\" (UID: \"5c96023c-09ac-49d0-b8bd-09f46f6d9655\") " pod="openshift-multus/multus-additional-cni-plugins-87scd" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.714724 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-host-run-netns\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.714739 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.714758 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-ovn-node-metrics-cert\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.714775 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-run-systemd\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 
00:10:10.714791 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-ovnkube-script-lib\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.714809 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f826f136-a910-4120-aa62-a08e427590c0-multus-cni-dir\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.714827 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f826f136-a910-4120-aa62-a08e427590c0-host-var-lib-cni-multus\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.714843 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gbqfv\" (UniqueName: \"kubernetes.io/projected/f826f136-a910-4120-aa62-a08e427590c0-kube-api-access-gbqfv\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.714864 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-log-socket\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.714879 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-host-run-ovn-kubernetes\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.714894 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pfkp9\" (UniqueName: \"kubernetes.io/projected/08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b-kube-api-access-pfkp9\") pod \"network-metrics-daemon-sctbw\" (UID: \"08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b\") " pod="openshift-multus/network-metrics-daemon-sctbw" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.714910 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f826f136-a910-4120-aa62-a08e427590c0-cnibin\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.714933 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f826f136-a910-4120-aa62-a08e427590c0-host-run-k8s-cni-cncf-io\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.714958 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-nb6p6\" (UniqueName: \"kubernetes.io/projected/5c96023c-09ac-49d0-b8bd-09f46f6d9655-kube-api-access-nb6p6\") pod \"multus-additional-cni-plugins-87scd\" (UID: \"5c96023c-09ac-49d0-b8bd-09f46f6d9655\") " pod="openshift-multus/multus-additional-cni-plugins-87scd" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.714973 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-systemd-units\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.714990 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-run-openvswitch\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.715005 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-node-log\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.715020 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-ovnkube-config\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.715035 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f826f136-a910-4120-aa62-a08e427590c0-host-var-lib-kubelet\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.715050 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f826f136-a910-4120-aa62-a08e427590c0-etc-kubernetes\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.715072 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5c96023c-09ac-49d0-b8bd-09f46f6d9655-os-release\") pod \"multus-additional-cni-plugins-87scd\" (UID: \"5c96023c-09ac-49d0-b8bd-09f46f6d9655\") " pod="openshift-multus/multus-additional-cni-plugins-87scd" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.715086 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-var-lib-openvswitch\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.715101 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/f826f136-a910-4120-aa62-a08e427590c0-multus-daemon-config\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.715117 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-host-kubelet\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.715132 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-host-slash\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.715149 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f826f136-a910-4120-aa62-a08e427590c0-system-cni-dir\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.715196 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.715206 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.715216 5124 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.715225 5124 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.715234 5124 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.715243 5124 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.715253 5124 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.715264 5124 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.715273 5124 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.715283 5124 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.715293 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.715302 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.715311 5124 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.715320 5124 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.715330 5124 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.715339 5124 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.715337 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5c96023c-09ac-49d0-b8bd-09f46f6d9655-cni-binary-copy\") pod \"multus-additional-cni-plugins-87scd\" (UID: \"5c96023c-09ac-49d0-b8bd-09f46f6d9655\") " pod="openshift-multus/multus-additional-cni-plugins-87scd" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.715349 5124 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.715359 5124 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.715384 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f826f136-a910-4120-aa62-a08e427590c0-host-var-lib-cni-bin\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.714737 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5c96023c-09ac-49d0-b8bd-09f46f6d9655-tuning-conf-dir\") pod \"multus-additional-cni-plugins-87scd\" (UID: \"5c96023c-09ac-49d0-b8bd-09f46f6d9655\") " pod="openshift-multus/multus-additional-cni-plugins-87scd" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.715426 5124 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.715433 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f826f136-a910-4120-aa62-a08e427590c0-host-run-netns\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.715453 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-host-cni-bin\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.715476 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-host-cni-netd\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.716246 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-run-systemd\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.713086 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/95fa0656-150a-4d93-a324-77a1306d91f7-rootfs\") pod \"machine-config-daemon-kmxcn\" (UID: \"95fa0656-150a-4d93-a324-77a1306d91f7\") " pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.716367 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f826f136-a910-4120-aa62-a08e427590c0-host-var-lib-kubelet\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.716403 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-var-lib-openvswitch\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.716564 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f826f136-a910-4120-aa62-a08e427590c0-os-release\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: 
I0126 00:10:10.716757 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-systemd-units\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.716760 5124 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.716828 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f826f136-a910-4120-aa62-a08e427590c0-multus-cni-dir\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.716839 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f826f136-a910-4120-aa62-a08e427590c0-host-run-multus-certs\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.716889 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5c96023c-09ac-49d0-b8bd-09f46f6d9655-os-release\") pod \"multus-additional-cni-plugins-87scd\" (UID: \"5c96023c-09ac-49d0-b8bd-09f46f6d9655\") " pod="openshift-multus/multus-additional-cni-plugins-87scd" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.716932 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f826f136-a910-4120-aa62-a08e427590c0-cnibin\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.716940 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f826f136-a910-4120-aa62-a08e427590c0-etc-kubernetes\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.716960 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f826f136-a910-4120-aa62-a08e427590c0-host-var-lib-cni-multus\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.716966 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-host-kubelet\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.716988 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-host-slash\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 
00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.717008 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f826f136-a910-4120-aa62-a08e427590c0-hostroot\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.717030 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-host-run-netns\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.717059 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f826f136-a910-4120-aa62-a08e427590c0-system-cni-dir\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.717081 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.717101 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f826f136-a910-4120-aa62-a08e427590c0-multus-conf-dir\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.717238 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f826f136-a910-4120-aa62-a08e427590c0-cni-binary-copy\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.717311 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-run-openvswitch\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.717358 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f826f136-a910-4120-aa62-a08e427590c0-host-run-k8s-cni-cncf-io\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.717473 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f826f136-a910-4120-aa62-a08e427590c0-multus-daemon-config\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.717519 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-node-log\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.717562 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5c96023c-09ac-49d0-b8bd-09f46f6d9655-system-cni-dir\") pod \"multus-additional-cni-plugins-87scd\" (UID: \"5c96023c-09ac-49d0-b8bd-09f46f6d9655\") " pod="openshift-multus/multus-additional-cni-plugins-87scd" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.717629 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-host-run-ovn-kubernetes\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.717659 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-log-socket\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.717724 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f826f136-a910-4120-aa62-a08e427590c0-multus-socket-dir-parent\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.717898 5124 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.717918 5124 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.717992 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/5c96023c-09ac-49d0-b8bd-09f46f6d9655-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-87scd\" (UID: \"5c96023c-09ac-49d0-b8bd-09f46f6d9655\") " pod="openshift-multus/multus-additional-cni-plugins-87scd" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.718126 5124 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.718141 5124 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.718153 5124 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" 
DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.718164 5124 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.718175 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.718188 5124 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.718198 5124 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.718207 5124 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.718217 5124 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.718229 5124 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.718239 5124 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.718249 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.718260 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.718270 5124 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.718280 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.718325 5124 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.718337 5124 reconciler_common.go:299] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.718347 5124 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.718375 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-ovn-node-metrics-cert\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.718984 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-ovnkube-config\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.718988 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/95fa0656-150a-4d93-a324-77a1306d91f7-proxy-tls\") pod \"machine-config-daemon-kmxcn\" (UID: \"95fa0656-150a-4d93-a324-77a1306d91f7\") " pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.719126 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-ovnkube-script-lib\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.718526 5124 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.719717 5124 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.719738 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.719752 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.719766 5124 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.719806 5124 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.719823 5124 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.719837 5124 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.719849 5124 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.719891 5124 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.719906 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.719924 5124 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.719937 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.719974 5124 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.720575 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.720694 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.720708 5124 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.720719 5124 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.720734 5124 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.720747 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.720785 5124 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.720800 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.720812 5124 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.720923 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.720940 5124 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.721008 5124 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.721053 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.721063 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.721074 5124 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.721084 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.721467 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5c96023c-09ac-49d0-b8bd-09f46f6d9655-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-87scd\" (UID: \"5c96023c-09ac-49d0-b8bd-09f46f6d9655\") " 
pod="openshift-multus/multus-additional-cni-plugins-87scd" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.727109 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 26 00:10:10 crc kubenswrapper[5124]: W0126 00:10:10.729642 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc4541ce_7789_4670_bc75_5c2868e52ce0.slice/crio-acf288bcd2bf165b0f0a8fd4e88910edf86305c3bc7f7071c089830025f7da13 WatchSource:0}: Error finding container acf288bcd2bf165b0f0a8fd4e88910edf86305c3bc7f7071c089830025f7da13: Status 404 returned error can't find the container with id acf288bcd2bf165b0f0a8fd4e88910edf86305c3bc7f7071c089830025f7da13 Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730365 5124 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730397 5124 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730406 5124 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730416 5124 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730426 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730437 5124 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730448 5124 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730459 5124 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730468 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730478 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node 
\"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730487 5124 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730496 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730505 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730514 5124 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730523 5124 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730531 5124 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730540 5124 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730548 5124 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730557 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730634 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730643 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730654 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730663 5124 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc 
kubenswrapper[5124]: I0126 00:10:10.730671 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730681 5124 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730690 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730698 5124 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730709 5124 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730721 5124 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730730 5124 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730740 5124 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730750 5124 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730761 5124 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730770 5124 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730778 5124 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730789 5124 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730800 5124 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730808 5124 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730818 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730831 5124 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730841 5124 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730849 5124 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730857 5124 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730868 5124 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730878 5124 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730887 5124 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730897 5124 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730905 5124 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730913 5124 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730921 5124 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730930 5124 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730938 5124 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730945 5124 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730953 5124 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730963 5124 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730971 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730981 5124 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730989 5124 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.730999 5124 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.731007 5124 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.731016 5124 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.731025 5124 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.731034 5124 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.731044 5124 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.731055 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.731064 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.731074 5124 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.731082 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.731090 5124 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.731098 5124 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.731107 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.731116 5124 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.732667 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbqfv\" (UniqueName: \"kubernetes.io/projected/f826f136-a910-4120-aa62-a08e427590c0-kube-api-access-gbqfv\") pod \"multus-smnb7\" (UID: \"f826f136-a910-4120-aa62-a08e427590c0\") " pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.733579 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfkp9\" (UniqueName: \"kubernetes.io/projected/08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b-kube-api-access-pfkp9\") pod \"network-metrics-daemon-sctbw\" (UID: \"08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b\") " pod="openshift-multus/network-metrics-daemon-sctbw" Jan 26 00:10:10 crc kubenswrapper[5124]: E0126 00:10:10.733894 5124 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:10 crc kubenswrapper[5124]: container 
&Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 26 00:10:10 crc kubenswrapper[5124]: if [[ -f "/env/_master" ]]; then Jan 26 00:10:10 crc kubenswrapper[5124]: set -o allexport Jan 26 00:10:10 crc kubenswrapper[5124]: source "/env/_master" Jan 26 00:10:10 crc kubenswrapper[5124]: set +o allexport Jan 26 00:10:10 crc kubenswrapper[5124]: fi Jan 26 00:10:10 crc kubenswrapper[5124]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Jan 26 00:10:10 crc kubenswrapper[5124]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Jan 26 00:10:10 crc kubenswrapper[5124]: ho_enable="--enable-hybrid-overlay" Jan 26 00:10:10 crc kubenswrapper[5124]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Jan 26 00:10:10 crc kubenswrapper[5124]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Jan 26 00:10:10 crc kubenswrapper[5124]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Jan 26 00:10:10 crc kubenswrapper[5124]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 26 00:10:10 crc kubenswrapper[5124]: --webhook-cert-dir="/etc/webhook-cert" \ Jan 26 00:10:10 crc kubenswrapper[5124]: --webhook-host=127.0.0.1 \ Jan 26 00:10:10 crc kubenswrapper[5124]: --webhook-port=9743 \ Jan 26 00:10:10 crc kubenswrapper[5124]: ${ho_enable} \ Jan 26 00:10:10 crc kubenswrapper[5124]: --enable-interconnect \ Jan 26 00:10:10 crc kubenswrapper[5124]: --disable-approver \ Jan 26 00:10:10 crc kubenswrapper[5124]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Jan 26 00:10:10 crc kubenswrapper[5124]: --wait-for-kubernetes-api=200s \ Jan 26 00:10:10 crc kubenswrapper[5124]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Jan 26 00:10:10 crc kubenswrapper[5124]: --loglevel="${LOGLEVEL}" Jan 26 00:10:10 crc kubenswrapper[5124]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:10 crc kubenswrapper[5124]: > logger="UnhandledError" Jan 26 00:10:10 crc kubenswrapper[5124]: E0126 00:10:10.736666 5124 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:10 crc kubenswrapper[5124]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 26 00:10:10 crc kubenswrapper[5124]: if [[ -f "/env/_master" ]]; then Jan 26 00:10:10 crc kubenswrapper[5124]: set -o allexport Jan 26 00:10:10 crc kubenswrapper[5124]: source "/env/_master" Jan 26 00:10:10 crc kubenswrapper[5124]: set +o allexport Jan 26 00:10:10 crc kubenswrapper[5124]: fi Jan 26 00:10:10 crc kubenswrapper[5124]: Jan 26 00:10:10 crc kubenswrapper[5124]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Jan 26 00:10:10 crc kubenswrapper[5124]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 26 00:10:10 crc kubenswrapper[5124]: --disable-webhook \ Jan 26 00:10:10 crc kubenswrapper[5124]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Jan 26 00:10:10 crc kubenswrapper[5124]: --loglevel="${LOGLEVEL}" Jan 26 00:10:10 crc kubenswrapper[5124]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:10 crc kubenswrapper[5124]: > logger="UnhandledError" Jan 26 00:10:10 crc kubenswrapper[5124]: E0126 00:10:10.737757 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Jan 26 00:10:10 crc kubenswrapper[5124]: W0126 00:10:10.738701 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod428b39f5_eb1c_4f65_b7a4_eeb6e84860cc.slice/crio-93ce8f9318165ac8c39243e11a7df39f27b5dbe551261f07c84767e817d1b4ad WatchSource:0}: Error finding container 93ce8f9318165ac8c39243e11a7df39f27b5dbe551261f07c84767e817d1b4ad: Status 404 returned error can't find the container with id 93ce8f9318165ac8c39243e11a7df39f27b5dbe551261f07c84767e817d1b4ad Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.738855 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb6p6\" (UniqueName: \"kubernetes.io/projected/5c96023c-09ac-49d0-b8bd-09f46f6d9655-kube-api-access-nb6p6\") pod \"multus-additional-cni-plugins-87scd\" (UID: \"5c96023c-09ac-49d0-b8bd-09f46f6d9655\") " pod="openshift-multus/multus-additional-cni-plugins-87scd" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.739930 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xt6k\" (UniqueName: \"kubernetes.io/projected/95fa0656-150a-4d93-a324-77a1306d91f7-kube-api-access-4xt6k\") pod \"machine-config-daemon-kmxcn\" (UID: \"95fa0656-150a-4d93-a324-77a1306d91f7\") " pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.741278 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sphjf\" 
(UniqueName: \"kubernetes.io/projected/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-kube-api-access-sphjf\") pod \"ovnkube-node-sdh5t\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: E0126 00:10:10.741289 5124 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 26 00:10:10 crc kubenswrapper[5124]: E0126 00:10:10.742636 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.743734 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-6grfh" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.754850 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-cwsts" Jan 26 00:10:10 crc kubenswrapper[5124]: W0126 00:10:10.755487 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45a1a609_6066_42a0_a450_b0e70365aa9b.slice/crio-f8154497ef78652105df3db962a47e28d2ab3e899e11f0c1dad70179435b89c9 WatchSource:0}: Error finding container f8154497ef78652105df3db962a47e28d2ab3e899e11f0c1dad70179435b89c9: Status 404 returned error can't find the container with id f8154497ef78652105df3db962a47e28d2ab3e899e11f0c1dad70179435b89c9 Jan 26 00:10:10 crc kubenswrapper[5124]: E0126 00:10:10.759161 5124 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:10 crc kubenswrapper[5124]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Jan 26 00:10:10 crc kubenswrapper[5124]: while [ true ]; Jan 26 00:10:10 crc kubenswrapper[5124]: do Jan 26 00:10:10 crc kubenswrapper[5124]: for f in $(ls /tmp/serviceca); do Jan 26 00:10:10 crc kubenswrapper[5124]: echo $f Jan 26 00:10:10 crc kubenswrapper[5124]: ca_file_path="/tmp/serviceca/${f}" Jan 26 00:10:10 crc kubenswrapper[5124]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Jan 26 00:10:10 crc kubenswrapper[5124]: reg_dir_path="/etc/docker/certs.d/${f}" Jan 26 00:10:10 crc kubenswrapper[5124]: if [ -e "${reg_dir_path}" ]; then Jan 26 00:10:10 crc kubenswrapper[5124]: cp -u $ca_file_path $reg_dir_path/ca.crt Jan 26 00:10:10 crc kubenswrapper[5124]: else Jan 26 00:10:10 crc kubenswrapper[5124]: mkdir $reg_dir_path Jan 26 00:10:10 crc kubenswrapper[5124]: cp $ca_file_path $reg_dir_path/ca.crt Jan 26 00:10:10 crc kubenswrapper[5124]: fi Jan 26 00:10:10 crc kubenswrapper[5124]: done Jan 26 00:10:10 crc kubenswrapper[5124]: for d in $(ls /etc/docker/certs.d); do Jan 26 00:10:10 crc kubenswrapper[5124]: echo $d Jan 26 00:10:10 crc kubenswrapper[5124]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Jan 26 00:10:10 crc kubenswrapper[5124]: reg_conf_path="/tmp/serviceca/${dp}" Jan 26 00:10:10 crc kubenswrapper[5124]: if [ ! 
-e "${reg_conf_path}" ]; then Jan 26 00:10:10 crc kubenswrapper[5124]: rm -rf /etc/docker/certs.d/$d Jan 26 00:10:10 crc kubenswrapper[5124]: fi Jan 26 00:10:10 crc kubenswrapper[5124]: done Jan 26 00:10:10 crc kubenswrapper[5124]: sleep 60 & wait ${!} Jan 26 00:10:10 crc kubenswrapper[5124]: done Jan 26 00:10:10 crc kubenswrapper[5124]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j6jf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-6grfh_openshift-image-registry(45a1a609-6066-42a0-a450-b0e70365aa9b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:10 crc kubenswrapper[5124]: > logger="UnhandledError" Jan 26 00:10:10 crc kubenswrapper[5124]: E0126 00:10:10.760827 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-6grfh" podUID="45a1a609-6066-42a0-a450-b0e70365aa9b" Jan 26 00:10:10 crc kubenswrapper[5124]: W0126 00:10:10.766409 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e5684ab_0b94_4eef_af30_0c6c4ab528af.slice/crio-71e13edc858fe7591c6c4d670324afeddda1eac0c0ee89df1b2003fbffb2db76 WatchSource:0}: Error finding container 71e13edc858fe7591c6c4d670324afeddda1eac0c0ee89df1b2003fbffb2db76: Status 404 returned error can't find the container with id 71e13edc858fe7591c6c4d670324afeddda1eac0c0ee89df1b2003fbffb2db76 Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.767954 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-mpdlk" Jan 26 00:10:10 crc kubenswrapper[5124]: E0126 00:10:10.768181 5124 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:10 crc kubenswrapper[5124]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Jan 26 00:10:10 crc kubenswrapper[5124]: set -uo pipefail Jan 26 00:10:10 crc kubenswrapper[5124]: Jan 26 00:10:10 crc kubenswrapper[5124]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Jan 26 00:10:10 crc kubenswrapper[5124]: Jan 26 00:10:10 crc kubenswrapper[5124]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Jan 26 00:10:10 crc kubenswrapper[5124]: HOSTS_FILE="/etc/hosts" Jan 26 00:10:10 crc kubenswrapper[5124]: TEMP_FILE="/tmp/hosts.tmp" Jan 26 00:10:10 crc kubenswrapper[5124]: Jan 26 00:10:10 crc kubenswrapper[5124]: IFS=', ' read -r -a services <<< "${SERVICES}" Jan 26 00:10:10 crc kubenswrapper[5124]: Jan 26 00:10:10 crc kubenswrapper[5124]: # Make a temporary file with the old hosts file's attributes. Jan 26 00:10:10 crc kubenswrapper[5124]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Jan 26 00:10:10 crc kubenswrapper[5124]: echo "Failed to preserve hosts file. Exiting." Jan 26 00:10:10 crc kubenswrapper[5124]: exit 1 Jan 26 00:10:10 crc kubenswrapper[5124]: fi Jan 26 00:10:10 crc kubenswrapper[5124]: Jan 26 00:10:10 crc kubenswrapper[5124]: while true; do Jan 26 00:10:10 crc kubenswrapper[5124]: declare -A svc_ips Jan 26 00:10:10 crc kubenswrapper[5124]: for svc in "${services[@]}"; do Jan 26 00:10:10 crc kubenswrapper[5124]: # Fetch service IP from cluster dns if present. We make several tries Jan 26 00:10:10 crc kubenswrapper[5124]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Jan 26 00:10:10 crc kubenswrapper[5124]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Jan 26 00:10:10 crc kubenswrapper[5124]: # support UDP loadbalancers and require reaching DNS through TCP. Jan 26 00:10:10 crc kubenswrapper[5124]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 26 00:10:10 crc kubenswrapper[5124]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 26 00:10:10 crc kubenswrapper[5124]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 26 00:10:10 crc kubenswrapper[5124]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Jan 26 00:10:10 crc kubenswrapper[5124]: for i in ${!cmds[*]} Jan 26 00:10:10 crc kubenswrapper[5124]: do Jan 26 00:10:10 crc kubenswrapper[5124]: ips=($(eval "${cmds[i]}")) Jan 26 00:10:10 crc kubenswrapper[5124]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Jan 26 00:10:10 crc kubenswrapper[5124]: svc_ips["${svc}"]="${ips[@]}" Jan 26 00:10:10 crc kubenswrapper[5124]: break Jan 26 00:10:10 crc kubenswrapper[5124]: fi Jan 26 00:10:10 crc kubenswrapper[5124]: done Jan 26 00:10:10 crc kubenswrapper[5124]: done Jan 26 00:10:10 crc kubenswrapper[5124]: Jan 26 00:10:10 crc kubenswrapper[5124]: # Update /etc/hosts only if we get valid service IPs Jan 26 00:10:10 crc kubenswrapper[5124]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Jan 26 00:10:10 crc kubenswrapper[5124]: # Stale entries could exist in /etc/hosts if the service is deleted Jan 26 00:10:10 crc kubenswrapper[5124]: if [[ -n "${svc_ips[*]-}" ]]; then Jan 26 00:10:10 crc kubenswrapper[5124]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Jan 26 00:10:10 crc kubenswrapper[5124]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Jan 26 00:10:10 crc kubenswrapper[5124]: # Only continue rebuilding the hosts entries if its original content is preserved Jan 26 00:10:10 crc kubenswrapper[5124]: sleep 60 & wait Jan 26 00:10:10 crc kubenswrapper[5124]: continue Jan 26 00:10:10 crc kubenswrapper[5124]: fi Jan 26 00:10:10 crc kubenswrapper[5124]: Jan 26 00:10:10 crc kubenswrapper[5124]: # Append resolver entries for services Jan 26 00:10:10 crc kubenswrapper[5124]: rc=0 Jan 26 00:10:10 crc kubenswrapper[5124]: for svc in "${!svc_ips[@]}"; do Jan 26 00:10:10 crc kubenswrapper[5124]: for ip in ${svc_ips[${svc}]}; do Jan 26 00:10:10 crc kubenswrapper[5124]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Jan 26 00:10:10 crc kubenswrapper[5124]: done Jan 26 00:10:10 crc kubenswrapper[5124]: done Jan 26 00:10:10 crc kubenswrapper[5124]: if [[ $rc -ne 0 ]]; then Jan 26 00:10:10 crc kubenswrapper[5124]: sleep 60 & wait Jan 26 00:10:10 crc kubenswrapper[5124]: continue Jan 26 00:10:10 crc kubenswrapper[5124]: fi Jan 26 00:10:10 crc kubenswrapper[5124]: Jan 26 00:10:10 crc kubenswrapper[5124]: Jan 26 00:10:10 crc kubenswrapper[5124]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Jan 26 00:10:10 crc kubenswrapper[5124]: # Replace /etc/hosts with our modified version if needed Jan 26 00:10:10 crc kubenswrapper[5124]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Jan 26 00:10:10 crc kubenswrapper[5124]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Jan 26 00:10:10 crc kubenswrapper[5124]: fi Jan 26 00:10:10 crc kubenswrapper[5124]: sleep 60 & wait Jan 26 00:10:10 crc kubenswrapper[5124]: unset svc_ips Jan 26 00:10:10 crc kubenswrapper[5124]: done Jan 26 00:10:10 crc kubenswrapper[5124]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dd8d8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-cwsts_openshift-dns(9e5684ab-0b94-4eef-af30-0c6c4ab528af): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:10 crc kubenswrapper[5124]: > logger="UnhandledError" Jan 26 00:10:10 crc kubenswrapper[5124]: E0126 00:10:10.770720 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-cwsts" podUID="9e5684ab-0b94-4eef-af30-0c6c4ab528af" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.781327 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.781360 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.781371 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.781388 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.781399 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:10Z","lastTransitionTime":"2026-01-26T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.782842 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-87scd" Jan 26 00:10:10 crc kubenswrapper[5124]: W0126 00:10:10.782921 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8660dad9_43c8_4c00_872a_e00a6baab0f7.slice/crio-4add0094d46275fe3cf880709f994c2de148adc310aa03ed67888ed05f96abd1 WatchSource:0}: Error finding container 4add0094d46275fe3cf880709f994c2de148adc310aa03ed67888ed05f96abd1: Status 404 returned error can't find the container with id 4add0094d46275fe3cf880709f994c2de148adc310aa03ed67888ed05f96abd1 Jan 26 00:10:10 crc kubenswrapper[5124]: E0126 00:10:10.787799 5124 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:10 crc kubenswrapper[5124]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Jan 26 00:10:10 crc kubenswrapper[5124]: set -euo pipefail Jan 26 00:10:10 crc kubenswrapper[5124]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Jan 26 00:10:10 crc kubenswrapper[5124]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Jan 26 00:10:10 crc kubenswrapper[5124]: # As the secret mount is optional we must wait for the files to be present. Jan 26 00:10:10 crc kubenswrapper[5124]: # The service is created in monitor.yaml and this is created in sdn.yaml. Jan 26 00:10:10 crc kubenswrapper[5124]: TS=$(date +%s) Jan 26 00:10:10 crc kubenswrapper[5124]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Jan 26 00:10:10 crc kubenswrapper[5124]: HAS_LOGGED_INFO=0 Jan 26 00:10:10 crc kubenswrapper[5124]: Jan 26 00:10:10 crc kubenswrapper[5124]: log_missing_certs(){ Jan 26 00:10:10 crc kubenswrapper[5124]: CUR_TS=$(date +%s) Jan 26 00:10:10 crc kubenswrapper[5124]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Jan 26 00:10:10 crc kubenswrapper[5124]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Jan 26 00:10:10 crc kubenswrapper[5124]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Jan 26 00:10:10 crc kubenswrapper[5124]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Jan 26 00:10:10 crc kubenswrapper[5124]: HAS_LOGGED_INFO=1 Jan 26 00:10:10 crc kubenswrapper[5124]: fi Jan 26 00:10:10 crc kubenswrapper[5124]: } Jan 26 00:10:10 crc kubenswrapper[5124]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Jan 26 00:10:10 crc kubenswrapper[5124]: log_missing_certs Jan 26 00:10:10 crc kubenswrapper[5124]: sleep 5 Jan 26 00:10:10 crc kubenswrapper[5124]: done Jan 26 00:10:10 crc kubenswrapper[5124]: Jan 26 00:10:10 crc kubenswrapper[5124]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Jan 26 00:10:10 crc kubenswrapper[5124]: exec /usr/bin/kube-rbac-proxy \ Jan 26 00:10:10 crc kubenswrapper[5124]: --logtostderr \ Jan 26 00:10:10 crc kubenswrapper[5124]: --secure-listen-address=:9108 \ Jan 26 00:10:10 crc kubenswrapper[5124]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Jan 26 00:10:10 crc kubenswrapper[5124]: --upstream=http://127.0.0.1:29108/ \ Jan 26 00:10:10 crc kubenswrapper[5124]: --tls-private-key-file=${TLS_PK} \ Jan 26 00:10:10 crc kubenswrapper[5124]: --tls-cert-file=${TLS_CERT} Jan 26 00:10:10 crc kubenswrapper[5124]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lx9l8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-mpdlk_openshift-ovn-kubernetes(8660dad9-43c8-4c00-872a-e00a6baab0f7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:10 crc kubenswrapper[5124]: > logger="UnhandledError" Jan 26 00:10:10 crc kubenswrapper[5124]: E0126 00:10:10.791301 5124 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:10 crc kubenswrapper[5124]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 26 00:10:10 crc kubenswrapper[5124]: if [[ -f "/env/_master" ]]; then Jan 26 00:10:10 crc kubenswrapper[5124]: set -o allexport Jan 26 00:10:10 crc kubenswrapper[5124]: source "/env/_master" Jan 26 00:10:10 crc kubenswrapper[5124]: set +o allexport Jan 26 00:10:10 crc kubenswrapper[5124]: fi Jan 26 00:10:10 crc kubenswrapper[5124]: Jan 26 00:10:10 crc kubenswrapper[5124]: ovn_v4_join_subnet_opt= Jan 26 00:10:10 crc kubenswrapper[5124]: if [[ "" != "" ]]; then Jan 26 00:10:10 crc kubenswrapper[5124]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Jan 26 00:10:10 crc kubenswrapper[5124]: fi Jan 26 00:10:10 crc kubenswrapper[5124]: ovn_v6_join_subnet_opt= Jan 26 00:10:10 crc kubenswrapper[5124]: if [[ "" != "" ]]; then Jan 26 00:10:10 crc kubenswrapper[5124]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Jan 26 00:10:10 crc kubenswrapper[5124]: fi Jan 26 
00:10:10 crc kubenswrapper[5124]: Jan 26 00:10:10 crc kubenswrapper[5124]: ovn_v4_transit_switch_subnet_opt= Jan 26 00:10:10 crc kubenswrapper[5124]: if [[ "" != "" ]]; then Jan 26 00:10:10 crc kubenswrapper[5124]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Jan 26 00:10:10 crc kubenswrapper[5124]: fi Jan 26 00:10:10 crc kubenswrapper[5124]: ovn_v6_transit_switch_subnet_opt= Jan 26 00:10:10 crc kubenswrapper[5124]: if [[ "" != "" ]]; then Jan 26 00:10:10 crc kubenswrapper[5124]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Jan 26 00:10:10 crc kubenswrapper[5124]: fi Jan 26 00:10:10 crc kubenswrapper[5124]: Jan 26 00:10:10 crc kubenswrapper[5124]: dns_name_resolver_enabled_flag= Jan 26 00:10:10 crc kubenswrapper[5124]: if [[ "false" == "true" ]]; then Jan 26 00:10:10 crc kubenswrapper[5124]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Jan 26 00:10:10 crc kubenswrapper[5124]: fi Jan 26 00:10:10 crc kubenswrapper[5124]: Jan 26 00:10:10 crc kubenswrapper[5124]: persistent_ips_enabled_flag="--enable-persistent-ips" Jan 26 00:10:10 crc kubenswrapper[5124]: Jan 26 00:10:10 crc kubenswrapper[5124]: # This is needed so that converting clusters from GA to TP Jan 26 00:10:10 crc kubenswrapper[5124]: # will rollout control plane pods as well Jan 26 00:10:10 crc kubenswrapper[5124]: network_segmentation_enabled_flag= Jan 26 00:10:10 crc kubenswrapper[5124]: multi_network_enabled_flag= Jan 26 00:10:10 crc kubenswrapper[5124]: if [[ "true" == "true" ]]; then Jan 26 00:10:10 crc kubenswrapper[5124]: multi_network_enabled_flag="--enable-multi-network" Jan 26 00:10:10 crc kubenswrapper[5124]: fi Jan 26 00:10:10 crc kubenswrapper[5124]: if [[ "true" == "true" ]]; then Jan 26 00:10:10 crc kubenswrapper[5124]: if [[ "true" != "true" ]]; then Jan 26 00:10:10 crc kubenswrapper[5124]: multi_network_enabled_flag="--enable-multi-network" Jan 26 00:10:10 crc kubenswrapper[5124]: fi Jan 26 00:10:10 crc kubenswrapper[5124]: network_segmentation_enabled_flag="--enable-network-segmentation" Jan 26 00:10:10 crc kubenswrapper[5124]: fi Jan 26 00:10:10 crc kubenswrapper[5124]: Jan 26 00:10:10 crc kubenswrapper[5124]: route_advertisements_enable_flag= Jan 26 00:10:10 crc kubenswrapper[5124]: if [[ "false" == "true" ]]; then Jan 26 00:10:10 crc kubenswrapper[5124]: route_advertisements_enable_flag="--enable-route-advertisements" Jan 26 00:10:10 crc kubenswrapper[5124]: fi Jan 26 00:10:10 crc kubenswrapper[5124]: Jan 26 00:10:10 crc kubenswrapper[5124]: preconfigured_udn_addresses_enable_flag= Jan 26 00:10:10 crc kubenswrapper[5124]: if [[ "false" == "true" ]]; then Jan 26 00:10:10 crc kubenswrapper[5124]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Jan 26 00:10:10 crc kubenswrapper[5124]: fi Jan 26 00:10:10 crc kubenswrapper[5124]: Jan 26 00:10:10 crc kubenswrapper[5124]: # Enable multi-network policy if configured (control-plane always full mode) Jan 26 00:10:10 crc kubenswrapper[5124]: multi_network_policy_enabled_flag= Jan 26 00:10:10 crc kubenswrapper[5124]: if [[ "false" == "true" ]]; then Jan 26 00:10:10 crc kubenswrapper[5124]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Jan 26 00:10:10 crc kubenswrapper[5124]: fi Jan 26 00:10:10 crc kubenswrapper[5124]: Jan 26 00:10:10 crc kubenswrapper[5124]: # Enable admin network policy if configured (control-plane always full mode) Jan 26 00:10:10 crc kubenswrapper[5124]: admin_network_policy_enabled_flag= Jan 26 00:10:10 crc 
kubenswrapper[5124]: if [[ "true" == "true" ]]; then Jan 26 00:10:10 crc kubenswrapper[5124]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Jan 26 00:10:10 crc kubenswrapper[5124]: fi Jan 26 00:10:10 crc kubenswrapper[5124]: Jan 26 00:10:10 crc kubenswrapper[5124]: if [ "shared" == "shared" ]; then Jan 26 00:10:10 crc kubenswrapper[5124]: gateway_mode_flags="--gateway-mode shared" Jan 26 00:10:10 crc kubenswrapper[5124]: elif [ "shared" == "local" ]; then Jan 26 00:10:10 crc kubenswrapper[5124]: gateway_mode_flags="--gateway-mode local" Jan 26 00:10:10 crc kubenswrapper[5124]: else Jan 26 00:10:10 crc kubenswrapper[5124]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." Jan 26 00:10:10 crc kubenswrapper[5124]: exit 1 Jan 26 00:10:10 crc kubenswrapper[5124]: fi Jan 26 00:10:10 crc kubenswrapper[5124]: Jan 26 00:10:10 crc kubenswrapper[5124]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Jan 26 00:10:10 crc kubenswrapper[5124]: exec /usr/bin/ovnkube \ Jan 26 00:10:10 crc kubenswrapper[5124]: --enable-interconnect \ Jan 26 00:10:10 crc kubenswrapper[5124]: --init-cluster-manager "${K8S_NODE}" \ Jan 26 00:10:10 crc kubenswrapper[5124]: --config-file=/run/ovnkube-config/ovnkube.conf \ Jan 26 00:10:10 crc kubenswrapper[5124]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Jan 26 00:10:10 crc kubenswrapper[5124]: --metrics-bind-address "127.0.0.1:29108" \ Jan 26 00:10:10 crc kubenswrapper[5124]: --metrics-enable-pprof \ Jan 26 00:10:10 crc kubenswrapper[5124]: --metrics-enable-config-duration \ Jan 26 00:10:10 crc kubenswrapper[5124]: ${ovn_v4_join_subnet_opt} \ Jan 26 00:10:10 crc kubenswrapper[5124]: ${ovn_v6_join_subnet_opt} \ Jan 26 00:10:10 crc kubenswrapper[5124]: ${ovn_v4_transit_switch_subnet_opt} \ Jan 26 00:10:10 crc kubenswrapper[5124]: ${ovn_v6_transit_switch_subnet_opt} \ Jan 26 00:10:10 crc kubenswrapper[5124]: ${dns_name_resolver_enabled_flag} \ Jan 26 00:10:10 crc kubenswrapper[5124]: ${persistent_ips_enabled_flag} \ Jan 26 00:10:10 crc kubenswrapper[5124]: ${multi_network_enabled_flag} \ Jan 26 00:10:10 crc kubenswrapper[5124]: ${network_segmentation_enabled_flag} \ Jan 26 00:10:10 crc kubenswrapper[5124]: ${gateway_mode_flags} \ Jan 26 00:10:10 crc kubenswrapper[5124]: ${route_advertisements_enable_flag} \ Jan 26 00:10:10 crc kubenswrapper[5124]: ${preconfigured_udn_addresses_enable_flag} \ Jan 26 00:10:10 crc kubenswrapper[5124]: --enable-egress-ip=true \ Jan 26 00:10:10 crc kubenswrapper[5124]: --enable-egress-firewall=true \ Jan 26 00:10:10 crc kubenswrapper[5124]: --enable-egress-qos=true \ Jan 26 00:10:10 crc kubenswrapper[5124]: --enable-egress-service=true \ Jan 26 00:10:10 crc kubenswrapper[5124]: --enable-multicast \ Jan 26 00:10:10 crc kubenswrapper[5124]: --enable-multi-external-gateway=true \ Jan 26 00:10:10 crc kubenswrapper[5124]: ${multi_network_policy_enabled_flag} \ Jan 26 00:10:10 crc kubenswrapper[5124]: ${admin_network_policy_enabled_flag} Jan 26 00:10:10 crc kubenswrapper[5124]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lx9l8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-mpdlk_openshift-ovn-kubernetes(8660dad9-43c8-4c00-872a-e00a6baab0f7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:10 crc kubenswrapper[5124]: > logger="UnhandledError" Jan 26 00:10:10 crc kubenswrapper[5124]: E0126 00:10:10.792957 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-mpdlk" podUID="8660dad9-43c8-4c00-872a-e00a6baab0f7" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.797755 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:10 crc kubenswrapper[5124]: W0126 00:10:10.797834 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c96023c_09ac_49d0_b8bd_09f46f6d9655.slice/crio-e22d7c6e98b218787ff2b2afc825968abe6f6b5b80288a536e7e6213a1a4109f WatchSource:0}: Error finding container e22d7c6e98b218787ff2b2afc825968abe6f6b5b80288a536e7e6213a1a4109f: Status 404 returned error can't find the container with id e22d7c6e98b218787ff2b2afc825968abe6f6b5b80288a536e7e6213a1a4109f Jan 26 00:10:10 crc kubenswrapper[5124]: E0126 00:10:10.802366 5124 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nb6p6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-87scd_openshift-multus(5c96023c-09ac-49d0-b8bd-09f46f6d9655): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 26 00:10:10 crc kubenswrapper[5124]: E0126 00:10:10.803717 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-87scd" podUID="5c96023c-09ac-49d0-b8bd-09f46f6d9655" Jan 26 00:10:10 crc kubenswrapper[5124]: W0126 00:10:10.809117 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd13181a0_d54a_460b_bbc7_4948fb1a4eaf.slice/crio-a4b6f731862c59616a6d616cabe04b020f0309fb492113732c1e390cdc8eada8 WatchSource:0}: Error finding container a4b6f731862c59616a6d616cabe04b020f0309fb492113732c1e390cdc8eada8: Status 404 returned error can't find the container with id a4b6f731862c59616a6d616cabe04b020f0309fb492113732c1e390cdc8eada8 
Jan 26 00:10:10 crc kubenswrapper[5124]: E0126 00:10:10.811260 5124 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:10 crc kubenswrapper[5124]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Jan 26 00:10:10 crc kubenswrapper[5124]: apiVersion: v1 Jan 26 00:10:10 crc kubenswrapper[5124]: clusters: Jan 26 00:10:10 crc kubenswrapper[5124]: - cluster: Jan 26 00:10:10 crc kubenswrapper[5124]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Jan 26 00:10:10 crc kubenswrapper[5124]: server: https://api-int.crc.testing:6443 Jan 26 00:10:10 crc kubenswrapper[5124]: name: default-cluster Jan 26 00:10:10 crc kubenswrapper[5124]: contexts: Jan 26 00:10:10 crc kubenswrapper[5124]: - context: Jan 26 00:10:10 crc kubenswrapper[5124]: cluster: default-cluster Jan 26 00:10:10 crc kubenswrapper[5124]: namespace: default Jan 26 00:10:10 crc kubenswrapper[5124]: user: default-auth Jan 26 00:10:10 crc kubenswrapper[5124]: name: default-context Jan 26 00:10:10 crc kubenswrapper[5124]: current-context: default-context Jan 26 00:10:10 crc kubenswrapper[5124]: kind: Config Jan 26 00:10:10 crc kubenswrapper[5124]: preferences: {} Jan 26 00:10:10 crc kubenswrapper[5124]: users: Jan 26 00:10:10 crc kubenswrapper[5124]: - name: default-auth Jan 26 00:10:10 crc kubenswrapper[5124]: user: Jan 26 00:10:10 crc kubenswrapper[5124]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 26 00:10:10 crc kubenswrapper[5124]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 26 00:10:10 crc kubenswrapper[5124]: EOF Jan 26 00:10:10 crc kubenswrapper[5124]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sphjf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-sdh5t_openshift-ovn-kubernetes(d13181a0-d54a-460b-bbc7-4948fb1a4eaf): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:10 crc kubenswrapper[5124]: > logger="UnhandledError" Jan 26 00:10:10 crc kubenswrapper[5124]: E0126 00:10:10.813135 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" podUID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.829320 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.837555 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-smnb7" Jan 26 00:10:10 crc kubenswrapper[5124]: W0126 00:10:10.839012 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95fa0656_150a_4d93_a324_77a1306d91f7.slice/crio-c26cd6824c0dfa04217e5fe542360d08a7c5b42e1a00bc6f44b0235c2a0edc04 WatchSource:0}: Error finding container c26cd6824c0dfa04217e5fe542360d08a7c5b42e1a00bc6f44b0235c2a0edc04: Status 404 returned error can't find the container with id c26cd6824c0dfa04217e5fe542360d08a7c5b42e1a00bc6f44b0235c2a0edc04 Jan 26 00:10:10 crc kubenswrapper[5124]: E0126 00:10:10.842978 5124 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4xt6k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-kmxcn_openshift-machine-config-operator(95fa0656-150a-4d93-a324-77a1306d91f7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 26 00:10:10 crc kubenswrapper[5124]: E0126 00:10:10.845676 5124 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 
--config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4xt6k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-kmxcn_openshift-machine-config-operator(95fa0656-150a-4d93-a324-77a1306d91f7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 26 00:10:10 crc kubenswrapper[5124]: E0126 00:10:10.846822 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" podUID="95fa0656-150a-4d93-a324-77a1306d91f7" Jan 26 00:10:10 crc kubenswrapper[5124]: W0126 00:10:10.848877 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf826f136_a910_4120_aa62_a08e427590c0.slice/crio-9d0f4c925a99c2cdd2143c7328d9fd5aa5da604d964d606ef0266f918c0dbd75 WatchSource:0}: Error finding container 9d0f4c925a99c2cdd2143c7328d9fd5aa5da604d964d606ef0266f918c0dbd75: Status 404 returned error can't find the container with id 9d0f4c925a99c2cdd2143c7328d9fd5aa5da604d964d606ef0266f918c0dbd75 Jan 26 00:10:10 crc kubenswrapper[5124]: E0126 00:10:10.851741 5124 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:10 crc kubenswrapper[5124]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Jan 26 00:10:10 crc kubenswrapper[5124]: /entrypoint/cnibincopy.sh; exec 
/usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Jan 26 00:10:10 crc kubenswrapper[5124]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gbqfv,ReadOnly:true,Mou
ntPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-smnb7_openshift-multus(f826f136-a910-4120-aa62-a08e427590c0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:10 crc kubenswrapper[5124]: > logger="UnhandledError" Jan 26 00:10:10 crc kubenswrapper[5124]: E0126 00:10:10.853153 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-smnb7" podUID="f826f136-a910-4120-aa62-a08e427590c0" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.884058 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.884174 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.884201 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.884235 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.884262 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:10Z","lastTransitionTime":"2026-01-26T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.986445 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.986744 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.986923 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.987021 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:10 crc kubenswrapper[5124]: I0126 00:10:10.987138 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:10Z","lastTransitionTime":"2026-01-26T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.035646 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.036016 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.036158 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.036279 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:11 crc kubenswrapper[5124]: E0126 00:10:11.035860 5124 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:10:11 crc kubenswrapper[5124]: E0126 00:10:11.036205 5124 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:10:11 crc kubenswrapper[5124]: E0126 00:10:11.036670 5124 projected.go:289] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:10:11 crc kubenswrapper[5124]: E0126 00:10:11.036690 5124 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:11 crc kubenswrapper[5124]: E0126 00:10:11.036286 5124 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:10:11 crc kubenswrapper[5124]: E0126 00:10:11.036768 5124 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:10:11 crc kubenswrapper[5124]: E0126 00:10:11.036781 5124 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:11 crc kubenswrapper[5124]: E0126 00:10:11.036359 5124 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:10:11 crc kubenswrapper[5124]: E0126 00:10:11.036631 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:12.036608315 +0000 UTC m=+89.945527674 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:10:11 crc kubenswrapper[5124]: E0126 00:10:11.037051 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:12.036992175 +0000 UTC m=+89.945911564 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:11 crc kubenswrapper[5124]: E0126 00:10:11.037112 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:12.037094159 +0000 UTC m=+89.946013548 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:11 crc kubenswrapper[5124]: E0126 00:10:11.037150 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:12.03713852 +0000 UTC m=+89.946057909 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.089333 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.089377 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.089387 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.089404 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.089415 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:11Z","lastTransitionTime":"2026-01-26T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.137851 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:11 crc kubenswrapper[5124]: E0126 00:10:11.138235 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:12.138181209 +0000 UTC m=+90.047100598 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.191665 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.191715 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.191727 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.191744 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.191754 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:11Z","lastTransitionTime":"2026-01-26T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.239185 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b-metrics-certs\") pod \"network-metrics-daemon-sctbw\" (UID: \"08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b\") " pod="openshift-multus/network-metrics-daemon-sctbw" Jan 26 00:10:11 crc kubenswrapper[5124]: E0126 00:10:11.239356 5124 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:10:11 crc kubenswrapper[5124]: E0126 00:10:11.239431 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b-metrics-certs podName:08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b nodeName:}" failed. No retries permitted until 2026-01-26 00:10:12.239415323 +0000 UTC m=+90.148334672 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b-metrics-certs") pod "network-metrics-daemon-sctbw" (UID: "08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.294407 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.294486 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.294511 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.294539 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.294561 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:11Z","lastTransitionTime":"2026-01-26T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.396728 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.396772 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.396786 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.396804 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.396815 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:11Z","lastTransitionTime":"2026-01-26T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.499564 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.499672 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.499698 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.499733 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.499759 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:11Z","lastTransitionTime":"2026-01-26T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.603227 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.603312 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.603340 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.603371 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.603394 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:11Z","lastTransitionTime":"2026-01-26T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.673650 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-6grfh" event={"ID":"45a1a609-6066-42a0-a450-b0e70365aa9b","Type":"ContainerStarted","Data":"f8154497ef78652105df3db962a47e28d2ab3e899e11f0c1dad70179435b89c9"} Jan 26 00:10:11 crc kubenswrapper[5124]: E0126 00:10:11.678672 5124 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:11 crc kubenswrapper[5124]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Jan 26 00:10:11 crc kubenswrapper[5124]: while [ true ]; Jan 26 00:10:11 crc kubenswrapper[5124]: do Jan 26 00:10:11 crc kubenswrapper[5124]: for f in $(ls /tmp/serviceca); do Jan 26 00:10:11 crc kubenswrapper[5124]: echo $f Jan 26 00:10:11 crc kubenswrapper[5124]: ca_file_path="/tmp/serviceca/${f}" Jan 26 00:10:11 crc kubenswrapper[5124]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Jan 26 00:10:11 crc kubenswrapper[5124]: reg_dir_path="/etc/docker/certs.d/${f}" Jan 26 00:10:11 crc kubenswrapper[5124]: if [ -e "${reg_dir_path}" ]; then Jan 26 00:10:11 crc kubenswrapper[5124]: cp -u $ca_file_path $reg_dir_path/ca.crt Jan 26 00:10:11 crc kubenswrapper[5124]: else Jan 26 00:10:11 crc kubenswrapper[5124]: mkdir $reg_dir_path Jan 26 00:10:11 crc kubenswrapper[5124]: cp $ca_file_path $reg_dir_path/ca.crt Jan 26 00:10:11 crc kubenswrapper[5124]: fi Jan 26 00:10:11 crc kubenswrapper[5124]: done Jan 26 00:10:11 crc kubenswrapper[5124]: for d in $(ls /etc/docker/certs.d); do Jan 26 00:10:11 crc kubenswrapper[5124]: echo $d Jan 26 00:10:11 crc kubenswrapper[5124]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Jan 26 00:10:11 crc kubenswrapper[5124]: reg_conf_path="/tmp/serviceca/${dp}" Jan 26 00:10:11 crc kubenswrapper[5124]: if [ ! 
-e "${reg_conf_path}" ]; then Jan 26 00:10:11 crc kubenswrapper[5124]: rm -rf /etc/docker/certs.d/$d Jan 26 00:10:11 crc kubenswrapper[5124]: fi Jan 26 00:10:11 crc kubenswrapper[5124]: done Jan 26 00:10:11 crc kubenswrapper[5124]: sleep 60 & wait ${!} Jan 26 00:10:11 crc kubenswrapper[5124]: done Jan 26 00:10:11 crc kubenswrapper[5124]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j6jf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-6grfh_openshift-image-registry(45a1a609-6066-42a0-a450-b0e70365aa9b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:11 crc kubenswrapper[5124]: > logger="UnhandledError" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.677747 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-mpdlk" event={"ID":"8660dad9-43c8-4c00-872a-e00a6baab0f7","Type":"ContainerStarted","Data":"4add0094d46275fe3cf880709f994c2de148adc310aa03ed67888ed05f96abd1"} Jan 26 00:10:11 crc kubenswrapper[5124]: E0126 00:10:11.680270 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-6grfh" podUID="45a1a609-6066-42a0-a450-b0e70365aa9b" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.681038 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"93ce8f9318165ac8c39243e11a7df39f27b5dbe551261f07c84767e817d1b4ad"} Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.681286 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-smnb7" event={"ID":"f826f136-a910-4120-aa62-a08e427590c0","Type":"ContainerStarted","Data":"9d0f4c925a99c2cdd2143c7328d9fd5aa5da604d964d606ef0266f918c0dbd75"} Jan 26 00:10:11 crc kubenswrapper[5124]: E0126 00:10:11.684260 5124 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.684507 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-87scd" event={"ID":"5c96023c-09ac-49d0-b8bd-09f46f6d9655","Type":"ContainerStarted","Data":"e22d7c6e98b218787ff2b2afc825968abe6f6b5b80288a536e7e6213a1a4109f"} Jan 26 00:10:11 crc kubenswrapper[5124]: E0126 00:10:11.685417 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.686097 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" event={"ID":"d13181a0-d54a-460b-bbc7-4948fb1a4eaf","Type":"ContainerStarted","Data":"a4b6f731862c59616a6d616cabe04b020f0309fb492113732c1e390cdc8eada8"} Jan 26 00:10:11 crc kubenswrapper[5124]: E0126 00:10:11.686131 5124 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:11 crc kubenswrapper[5124]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Jan 26 00:10:11 crc kubenswrapper[5124]: set -euo pipefail Jan 26 
00:10:11 crc kubenswrapper[5124]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Jan 26 00:10:11 crc kubenswrapper[5124]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Jan 26 00:10:11 crc kubenswrapper[5124]: # As the secret mount is optional we must wait for the files to be present. Jan 26 00:10:11 crc kubenswrapper[5124]: # The service is created in monitor.yaml and this is created in sdn.yaml. Jan 26 00:10:11 crc kubenswrapper[5124]: TS=$(date +%s) Jan 26 00:10:11 crc kubenswrapper[5124]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Jan 26 00:10:11 crc kubenswrapper[5124]: HAS_LOGGED_INFO=0 Jan 26 00:10:11 crc kubenswrapper[5124]: Jan 26 00:10:11 crc kubenswrapper[5124]: log_missing_certs(){ Jan 26 00:10:11 crc kubenswrapper[5124]: CUR_TS=$(date +%s) Jan 26 00:10:11 crc kubenswrapper[5124]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Jan 26 00:10:11 crc kubenswrapper[5124]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Jan 26 00:10:11 crc kubenswrapper[5124]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Jan 26 00:10:11 crc kubenswrapper[5124]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Jan 26 00:10:11 crc kubenswrapper[5124]: HAS_LOGGED_INFO=1 Jan 26 00:10:11 crc kubenswrapper[5124]: fi Jan 26 00:10:11 crc kubenswrapper[5124]: } Jan 26 00:10:11 crc kubenswrapper[5124]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Jan 26 00:10:11 crc kubenswrapper[5124]: log_missing_certs Jan 26 00:10:11 crc kubenswrapper[5124]: sleep 5 Jan 26 00:10:11 crc kubenswrapper[5124]: done Jan 26 00:10:11 crc kubenswrapper[5124]: Jan 26 00:10:11 crc kubenswrapper[5124]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Jan 26 00:10:11 crc kubenswrapper[5124]: exec /usr/bin/kube-rbac-proxy \ Jan 26 00:10:11 crc kubenswrapper[5124]: --logtostderr \ Jan 26 00:10:11 crc kubenswrapper[5124]: --secure-listen-address=:9108 \ Jan 26 00:10:11 crc kubenswrapper[5124]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Jan 26 00:10:11 crc kubenswrapper[5124]: --upstream=http://127.0.0.1:29108/ \ Jan 26 00:10:11 crc kubenswrapper[5124]: --tls-private-key-file=${TLS_PK} \ Jan 26 00:10:11 crc kubenswrapper[5124]: --tls-cert-file=${TLS_CERT} Jan 26 00:10:11 crc kubenswrapper[5124]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lx9l8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-mpdlk_openshift-ovn-kubernetes(8660dad9-43c8-4c00-872a-e00a6baab0f7): 
CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:11 crc kubenswrapper[5124]: > logger="UnhandledError" Jan 26 00:10:11 crc kubenswrapper[5124]: E0126 00:10:11.687077 5124 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:11 crc kubenswrapper[5124]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Jan 26 00:10:11 crc kubenswrapper[5124]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Jan 26 00:10:11 crc kubenswrapper[5124]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-d
ir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gbqfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-smnb7_openshift-multus(f826f136-a910-4120-aa62-a08e427590c0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:11 crc kubenswrapper[5124]: > logger="UnhandledError" Jan 26 00:10:11 crc kubenswrapper[5124]: E0126 00:10:11.689504 5124 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nb6p6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-87scd_openshift-multus(5c96023c-09ac-49d0-b8bd-09f46f6d9655): CreateContainerConfigError: services have not yet been read at least once, cannot 
construct envvars" logger="UnhandledError" Jan 26 00:10:11 crc kubenswrapper[5124]: E0126 00:10:11.689610 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-smnb7" podUID="f826f136-a910-4120-aa62-a08e427590c0" Jan 26 00:10:11 crc kubenswrapper[5124]: E0126 00:10:11.690219 5124 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:11 crc kubenswrapper[5124]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 26 00:10:11 crc kubenswrapper[5124]: if [[ -f "/env/_master" ]]; then Jan 26 00:10:11 crc kubenswrapper[5124]: set -o allexport Jan 26 00:10:11 crc kubenswrapper[5124]: source "/env/_master" Jan 26 00:10:11 crc kubenswrapper[5124]: set +o allexport Jan 26 00:10:11 crc kubenswrapper[5124]: fi Jan 26 00:10:11 crc kubenswrapper[5124]: Jan 26 00:10:11 crc kubenswrapper[5124]: ovn_v4_join_subnet_opt= Jan 26 00:10:11 crc kubenswrapper[5124]: if [[ "" != "" ]]; then Jan 26 00:10:11 crc kubenswrapper[5124]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Jan 26 00:10:11 crc kubenswrapper[5124]: fi Jan 26 00:10:11 crc kubenswrapper[5124]: ovn_v6_join_subnet_opt= Jan 26 00:10:11 crc kubenswrapper[5124]: if [[ "" != "" ]]; then Jan 26 00:10:11 crc kubenswrapper[5124]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Jan 26 00:10:11 crc kubenswrapper[5124]: fi Jan 26 00:10:11 crc kubenswrapper[5124]: Jan 26 00:10:11 crc kubenswrapper[5124]: ovn_v4_transit_switch_subnet_opt= Jan 26 00:10:11 crc kubenswrapper[5124]: if [[ "" != "" ]]; then Jan 26 00:10:11 crc kubenswrapper[5124]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Jan 26 00:10:11 crc kubenswrapper[5124]: fi Jan 26 00:10:11 crc kubenswrapper[5124]: ovn_v6_transit_switch_subnet_opt= Jan 26 00:10:11 crc kubenswrapper[5124]: if [[ "" != "" ]]; then Jan 26 00:10:11 crc kubenswrapper[5124]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Jan 26 00:10:11 crc kubenswrapper[5124]: fi Jan 26 00:10:11 crc kubenswrapper[5124]: Jan 26 00:10:11 crc kubenswrapper[5124]: dns_name_resolver_enabled_flag= Jan 26 00:10:11 crc kubenswrapper[5124]: if [[ "false" == "true" ]]; then Jan 26 00:10:11 crc kubenswrapper[5124]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Jan 26 00:10:11 crc kubenswrapper[5124]: fi Jan 26 00:10:11 crc kubenswrapper[5124]: Jan 26 00:10:11 crc kubenswrapper[5124]: persistent_ips_enabled_flag="--enable-persistent-ips" Jan 26 00:10:11 crc kubenswrapper[5124]: Jan 26 00:10:11 crc kubenswrapper[5124]: # This is needed so that converting clusters from GA to TP Jan 26 00:10:11 crc kubenswrapper[5124]: # will rollout control plane pods as well Jan 26 00:10:11 crc kubenswrapper[5124]: network_segmentation_enabled_flag= Jan 26 00:10:11 crc kubenswrapper[5124]: multi_network_enabled_flag= Jan 26 00:10:11 crc kubenswrapper[5124]: if [[ "true" == "true" ]]; then Jan 26 00:10:11 crc kubenswrapper[5124]: multi_network_enabled_flag="--enable-multi-network" Jan 26 00:10:11 crc kubenswrapper[5124]: fi Jan 26 00:10:11 crc kubenswrapper[5124]: if [[ "true" == "true" ]]; then Jan 26 00:10:11 crc kubenswrapper[5124]: if [[ "true" != "true" ]]; then Jan 26 00:10:11 crc kubenswrapper[5124]: 
multi_network_enabled_flag="--enable-multi-network" Jan 26 00:10:11 crc kubenswrapper[5124]: fi Jan 26 00:10:11 crc kubenswrapper[5124]: network_segmentation_enabled_flag="--enable-network-segmentation" Jan 26 00:10:11 crc kubenswrapper[5124]: fi Jan 26 00:10:11 crc kubenswrapper[5124]: Jan 26 00:10:11 crc kubenswrapper[5124]: route_advertisements_enable_flag= Jan 26 00:10:11 crc kubenswrapper[5124]: if [[ "false" == "true" ]]; then Jan 26 00:10:11 crc kubenswrapper[5124]: route_advertisements_enable_flag="--enable-route-advertisements" Jan 26 00:10:11 crc kubenswrapper[5124]: fi Jan 26 00:10:11 crc kubenswrapper[5124]: Jan 26 00:10:11 crc kubenswrapper[5124]: preconfigured_udn_addresses_enable_flag= Jan 26 00:10:11 crc kubenswrapper[5124]: if [[ "false" == "true" ]]; then Jan 26 00:10:11 crc kubenswrapper[5124]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Jan 26 00:10:11 crc kubenswrapper[5124]: fi Jan 26 00:10:11 crc kubenswrapper[5124]: Jan 26 00:10:11 crc kubenswrapper[5124]: # Enable multi-network policy if configured (control-plane always full mode) Jan 26 00:10:11 crc kubenswrapper[5124]: multi_network_policy_enabled_flag= Jan 26 00:10:11 crc kubenswrapper[5124]: if [[ "false" == "true" ]]; then Jan 26 00:10:11 crc kubenswrapper[5124]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Jan 26 00:10:11 crc kubenswrapper[5124]: fi Jan 26 00:10:11 crc kubenswrapper[5124]: Jan 26 00:10:11 crc kubenswrapper[5124]: # Enable admin network policy if configured (control-plane always full mode) Jan 26 00:10:11 crc kubenswrapper[5124]: admin_network_policy_enabled_flag= Jan 26 00:10:11 crc kubenswrapper[5124]: if [[ "true" == "true" ]]; then Jan 26 00:10:11 crc kubenswrapper[5124]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Jan 26 00:10:11 crc kubenswrapper[5124]: fi Jan 26 00:10:11 crc kubenswrapper[5124]: Jan 26 00:10:11 crc kubenswrapper[5124]: if [ "shared" == "shared" ]; then Jan 26 00:10:11 crc kubenswrapper[5124]: gateway_mode_flags="--gateway-mode shared" Jan 26 00:10:11 crc kubenswrapper[5124]: elif [ "shared" == "local" ]; then Jan 26 00:10:11 crc kubenswrapper[5124]: gateway_mode_flags="--gateway-mode local" Jan 26 00:10:11 crc kubenswrapper[5124]: else Jan 26 00:10:11 crc kubenswrapper[5124]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." 
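# --- editor's aside (sketch; not part of the quoted container command) -------
# This command dump appears because the kubelet reports the container failing
# with CreateContainerConfigError: "services have not yet been read at least
# once, cannot construct envvars". The kubelet will not build the
# *_SERVICE_HOST / *_SERVICE_PORT environment variables for any container
# until its Service informer has listed Services from the API server at least
# once, which is why every start attempt in this stretch of the log fails with
# the same message until the apiserver connection comes up. Once a pod does
# run, the injected variables are visible with (pod name is a placeholder):
#   oc -n openshift-ovn-kubernetes exec <ovnkube-control-plane-pod> -c ovnkube-cluster-manager -- env | grep _SERVICE_
# ------------------------------------------------------------------------------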
Jan 26 00:10:11 crc kubenswrapper[5124]: exit 1 Jan 26 00:10:11 crc kubenswrapper[5124]: fi Jan 26 00:10:11 crc kubenswrapper[5124]: Jan 26 00:10:11 crc kubenswrapper[5124]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Jan 26 00:10:11 crc kubenswrapper[5124]: exec /usr/bin/ovnkube \ Jan 26 00:10:11 crc kubenswrapper[5124]: --enable-interconnect \ Jan 26 00:10:11 crc kubenswrapper[5124]: --init-cluster-manager "${K8S_NODE}" \ Jan 26 00:10:11 crc kubenswrapper[5124]: --config-file=/run/ovnkube-config/ovnkube.conf \ Jan 26 00:10:11 crc kubenswrapper[5124]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Jan 26 00:10:11 crc kubenswrapper[5124]: --metrics-bind-address "127.0.0.1:29108" \ Jan 26 00:10:11 crc kubenswrapper[5124]: --metrics-enable-pprof \ Jan 26 00:10:11 crc kubenswrapper[5124]: --metrics-enable-config-duration \ Jan 26 00:10:11 crc kubenswrapper[5124]: ${ovn_v4_join_subnet_opt} \ Jan 26 00:10:11 crc kubenswrapper[5124]: ${ovn_v6_join_subnet_opt} \ Jan 26 00:10:11 crc kubenswrapper[5124]: ${ovn_v4_transit_switch_subnet_opt} \ Jan 26 00:10:11 crc kubenswrapper[5124]: ${ovn_v6_transit_switch_subnet_opt} \ Jan 26 00:10:11 crc kubenswrapper[5124]: ${dns_name_resolver_enabled_flag} \ Jan 26 00:10:11 crc kubenswrapper[5124]: ${persistent_ips_enabled_flag} \ Jan 26 00:10:11 crc kubenswrapper[5124]: ${multi_network_enabled_flag} \ Jan 26 00:10:11 crc kubenswrapper[5124]: ${network_segmentation_enabled_flag} \ Jan 26 00:10:11 crc kubenswrapper[5124]: ${gateway_mode_flags} \ Jan 26 00:10:11 crc kubenswrapper[5124]: ${route_advertisements_enable_flag} \ Jan 26 00:10:11 crc kubenswrapper[5124]: ${preconfigured_udn_addresses_enable_flag} \ Jan 26 00:10:11 crc kubenswrapper[5124]: --enable-egress-ip=true \ Jan 26 00:10:11 crc kubenswrapper[5124]: --enable-egress-firewall=true \ Jan 26 00:10:11 crc kubenswrapper[5124]: --enable-egress-qos=true \ Jan 26 00:10:11 crc kubenswrapper[5124]: --enable-egress-service=true \ Jan 26 00:10:11 crc kubenswrapper[5124]: --enable-multicast \ Jan 26 00:10:11 crc kubenswrapper[5124]: --enable-multi-external-gateway=true \ Jan 26 00:10:11 crc kubenswrapper[5124]: ${multi_network_policy_enabled_flag} \ Jan 26 00:10:11 crc kubenswrapper[5124]: ${admin_network_policy_enabled_flag} Jan 26 00:10:11 crc kubenswrapper[5124]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lx9l8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-mpdlk_openshift-ovn-kubernetes(8660dad9-43c8-4c00-872a-e00a6baab0f7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:11 crc kubenswrapper[5124]: > logger="UnhandledError" Jan 26 00:10:11 crc kubenswrapper[5124]: E0126 00:10:11.690259 5124 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:11 crc kubenswrapper[5124]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Jan 26 00:10:11 crc kubenswrapper[5124]: apiVersion: v1 Jan 26 00:10:11 crc kubenswrapper[5124]: clusters: Jan 26 00:10:11 crc kubenswrapper[5124]: - cluster: Jan 26 00:10:11 crc kubenswrapper[5124]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Jan 26 00:10:11 crc kubenswrapper[5124]: server: https://api-int.crc.testing:6443 Jan 26 00:10:11 crc kubenswrapper[5124]: name: default-cluster Jan 26 00:10:11 crc kubenswrapper[5124]: contexts: Jan 26 00:10:11 crc kubenswrapper[5124]: - context: Jan 26 00:10:11 crc kubenswrapper[5124]: cluster: default-cluster Jan 26 00:10:11 crc kubenswrapper[5124]: namespace: default Jan 26 00:10:11 crc kubenswrapper[5124]: user: default-auth Jan 26 00:10:11 crc kubenswrapper[5124]: name: default-context Jan 26 00:10:11 crc kubenswrapper[5124]: current-context: default-context Jan 26 00:10:11 crc kubenswrapper[5124]: kind: Config Jan 26 00:10:11 crc kubenswrapper[5124]: preferences: {} Jan 26 00:10:11 crc kubenswrapper[5124]: users: Jan 26 00:10:11 crc kubenswrapper[5124]: - name: default-auth Jan 26 00:10:11 crc kubenswrapper[5124]: user: Jan 26 00:10:11 crc kubenswrapper[5124]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 26 00:10:11 crc kubenswrapper[5124]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 26 00:10:11 crc kubenswrapper[5124]: EOF Jan 26 00:10:11 crc kubenswrapper[5124]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sphjf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-sdh5t_openshift-ovn-kubernetes(d13181a0-d54a-460b-bbc7-4948fb1a4eaf): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:11 crc kubenswrapper[5124]: > logger="UnhandledError" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.690476 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"b46f80af12a8385f27b22e0e0b2bb9199015021a1612ea89ca081c38ad8cdfa2"} Jan 26 00:10:11 crc kubenswrapper[5124]: E0126 00:10:11.690771 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-87scd" podUID="5c96023c-09ac-49d0-b8bd-09f46f6d9655" Jan 26 00:10:11 crc kubenswrapper[5124]: E0126 00:10:11.691294 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-mpdlk" podUID="8660dad9-43c8-4c00-872a-e00a6baab0f7" Jan 26 00:10:11 crc kubenswrapper[5124]: E0126 00:10:11.691331 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" podUID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" Jan 26 00:10:11 crc kubenswrapper[5124]: E0126 00:10:11.691722 5124 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:11 crc kubenswrapper[5124]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Jan 26 00:10:11 crc kubenswrapper[5124]: set -o allexport Jan 26 00:10:11 crc kubenswrapper[5124]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Jan 26 00:10:11 crc kubenswrapper[5124]: source /etc/kubernetes/apiserver-url.env Jan 26 00:10:11 crc kubenswrapper[5124]: else Jan 26 00:10:11 crc kubenswrapper[5124]: echo "Error: 
/etc/kubernetes/apiserver-url.env is missing" Jan 26 00:10:11 crc kubenswrapper[5124]: exit 1 Jan 26 00:10:11 crc kubenswrapper[5124]: fi Jan 26 00:10:11 crc kubenswrapper[5124]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Jan 26 00:10:11 crc kubenswrapper[5124]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:11 crc kubenswrapper[5124]: > logger="UnhandledError" Jan 26 00:10:11 crc kubenswrapper[5124]: E0126 00:10:11.692779 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.693177 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"acf288bcd2bf165b0f0a8fd4e88910edf86305c3bc7f7071c089830025f7da13"} Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.693934 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:11 crc kubenswrapper[5124]: E0126 00:10:11.694366 5124 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:11 crc kubenswrapper[5124]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 26 00:10:11 crc kubenswrapper[5124]: if [[ -f "/env/_master" ]]; then Jan 26 00:10:11 crc kubenswrapper[5124]: set -o allexport Jan 26 00:10:11 crc kubenswrapper[5124]: source "/env/_master" Jan 26 00:10:11 crc kubenswrapper[5124]: set +o allexport Jan 26 00:10:11 crc kubenswrapper[5124]: fi Jan 26 00:10:11 crc kubenswrapper[5124]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Jan 26 00:10:11 crc kubenswrapper[5124]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Jan 26 00:10:11 crc kubenswrapper[5124]: ho_enable="--enable-hybrid-overlay" Jan 26 00:10:11 crc kubenswrapper[5124]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Jan 26 00:10:11 crc kubenswrapper[5124]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Jan 26 00:10:11 crc kubenswrapper[5124]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Jan 26 00:10:11 crc kubenswrapper[5124]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 26 00:10:11 crc kubenswrapper[5124]: --webhook-cert-dir="/etc/webhook-cert" \ Jan 26 00:10:11 crc kubenswrapper[5124]: --webhook-host=127.0.0.1 \ Jan 26 00:10:11 crc kubenswrapper[5124]: --webhook-port=9743 \ Jan 26 00:10:11 crc kubenswrapper[5124]: ${ho_enable} \ Jan 26 00:10:11 crc kubenswrapper[5124]: --enable-interconnect \ Jan 26 00:10:11 crc kubenswrapper[5124]: --disable-approver \ Jan 26 00:10:11 crc kubenswrapper[5124]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Jan 26 00:10:11 crc kubenswrapper[5124]: --wait-for-kubernetes-api=200s \ Jan 26 00:10:11 crc kubenswrapper[5124]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Jan 26 00:10:11 crc kubenswrapper[5124]: --loglevel="${LOGLEVEL}" Jan 26 00:10:11 crc kubenswrapper[5124]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct 
envvars Jan 26 00:10:11 crc kubenswrapper[5124]: > logger="UnhandledError" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.696004 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" event={"ID":"95fa0656-150a-4d93-a324-77a1306d91f7","Type":"ContainerStarted","Data":"c26cd6824c0dfa04217e5fe542360d08a7c5b42e1a00bc6f44b0235c2a0edc04"} Jan 26 00:10:11 crc kubenswrapper[5124]: E0126 00:10:11.696948 5124 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:11 crc kubenswrapper[5124]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 26 00:10:11 crc kubenswrapper[5124]: if [[ -f "/env/_master" ]]; then Jan 26 00:10:11 crc kubenswrapper[5124]: set -o allexport Jan 26 00:10:11 crc kubenswrapper[5124]: source "/env/_master" Jan 26 00:10:11 crc kubenswrapper[5124]: set +o allexport Jan 26 00:10:11 crc kubenswrapper[5124]: fi Jan 26 00:10:11 crc kubenswrapper[5124]: Jan 26 00:10:11 crc kubenswrapper[5124]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Jan 26 00:10:11 crc kubenswrapper[5124]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 26 00:10:11 crc kubenswrapper[5124]: --disable-webhook \ Jan 26 00:10:11 crc kubenswrapper[5124]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Jan 26 00:10:11 crc kubenswrapper[5124]: --loglevel="${LOGLEVEL}" Jan 26 00:10:11 crc kubenswrapper[5124]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:11 crc kubenswrapper[5124]: > logger="UnhandledError" Jan 26 00:10:11 crc kubenswrapper[5124]: E0126 00:10:11.697868 5124 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4xt6k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-kmxcn_openshift-machine-config-operator(95fa0656-150a-4d93-a324-77a1306d91f7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 26 00:10:11 crc kubenswrapper[5124]: E0126 00:10:11.698719 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.699558 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-cwsts" event={"ID":"9e5684ab-0b94-4eef-af30-0c6c4ab528af","Type":"ContainerStarted","Data":"71e13edc858fe7591c6c4d670324afeddda1eac0c0ee89df1b2003fbffb2db76"} Jan 26 00:10:11 crc kubenswrapper[5124]: E0126 00:10:11.701094 5124 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml 
--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4xt6k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-kmxcn_openshift-machine-config-operator(95fa0656-150a-4d93-a324-77a1306d91f7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 26 00:10:11 crc kubenswrapper[5124]: E0126 00:10:11.701266 5124 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:11 crc kubenswrapper[5124]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Jan 26 00:10:11 crc kubenswrapper[5124]: set -uo pipefail Jan 26 00:10:11 crc kubenswrapper[5124]: Jan 26 00:10:11 crc kubenswrapper[5124]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Jan 26 00:10:11 crc kubenswrapper[5124]: Jan 26 00:10:11 crc kubenswrapper[5124]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Jan 26 00:10:11 crc kubenswrapper[5124]: HOSTS_FILE="/etc/hosts" Jan 26 00:10:11 crc kubenswrapper[5124]: TEMP_FILE="/tmp/hosts.tmp" Jan 26 00:10:11 crc kubenswrapper[5124]: Jan 26 00:10:11 crc kubenswrapper[5124]: IFS=', ' read -r -a services <<< "${SERVICES}" Jan 26 00:10:11 crc kubenswrapper[5124]: Jan 26 00:10:11 crc kubenswrapper[5124]: # Make a temporary file with the old hosts file's attributes. Jan 26 00:10:11 crc kubenswrapper[5124]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Jan 26 00:10:11 crc kubenswrapper[5124]: echo "Failed to preserve hosts file. Exiting." 
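# --- editor's aside (sketch; not part of the quoted container command) -------
# Purpose of this dns-node-resolver loop: resolve each name in SERVICES (here
# only image-registry.openshift-image-registry.svc, per the Env list further
# down) against NAMESERVER (10.217.4.10, the cluster DNS service) and keep
# marker-tagged entries in the node's /etc/hosts so host-level processes such
# as CRI-O can resolve in-cluster service names. Each generated entry has the
# form "<ip> <svc> <svc>.cluster.local # openshift-generated-node-resolver";
# to inspect the result on the node:
#   grep openshift-generated-node-resolver /etc/hosts
# ------------------------------------------------------------------------------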
Jan 26 00:10:11 crc kubenswrapper[5124]: exit 1 Jan 26 00:10:11 crc kubenswrapper[5124]: fi Jan 26 00:10:11 crc kubenswrapper[5124]: Jan 26 00:10:11 crc kubenswrapper[5124]: while true; do Jan 26 00:10:11 crc kubenswrapper[5124]: declare -A svc_ips Jan 26 00:10:11 crc kubenswrapper[5124]: for svc in "${services[@]}"; do Jan 26 00:10:11 crc kubenswrapper[5124]: # Fetch service IP from cluster dns if present. We make several tries Jan 26 00:10:11 crc kubenswrapper[5124]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Jan 26 00:10:11 crc kubenswrapper[5124]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Jan 26 00:10:11 crc kubenswrapper[5124]: # support UDP loadbalancers and require reaching DNS through TCP. Jan 26 00:10:11 crc kubenswrapper[5124]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 26 00:10:11 crc kubenswrapper[5124]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 26 00:10:11 crc kubenswrapper[5124]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 26 00:10:11 crc kubenswrapper[5124]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Jan 26 00:10:11 crc kubenswrapper[5124]: for i in ${!cmds[*]} Jan 26 00:10:11 crc kubenswrapper[5124]: do Jan 26 00:10:11 crc kubenswrapper[5124]: ips=($(eval "${cmds[i]}")) Jan 26 00:10:11 crc kubenswrapper[5124]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Jan 26 00:10:11 crc kubenswrapper[5124]: svc_ips["${svc}"]="${ips[@]}" Jan 26 00:10:11 crc kubenswrapper[5124]: break Jan 26 00:10:11 crc kubenswrapper[5124]: fi Jan 26 00:10:11 crc kubenswrapper[5124]: done Jan 26 00:10:11 crc kubenswrapper[5124]: done Jan 26 00:10:11 crc kubenswrapper[5124]: Jan 26 00:10:11 crc kubenswrapper[5124]: # Update /etc/hosts only if we get valid service IPs Jan 26 00:10:11 crc kubenswrapper[5124]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Jan 26 00:10:11 crc kubenswrapper[5124]: # Stale entries could exist in /etc/hosts if the service is deleted Jan 26 00:10:11 crc kubenswrapper[5124]: if [[ -n "${svc_ips[*]-}" ]]; then Jan 26 00:10:11 crc kubenswrapper[5124]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Jan 26 00:10:11 crc kubenswrapper[5124]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Jan 26 00:10:11 crc kubenswrapper[5124]: # Only continue rebuilding the hosts entries if its original content is preserved Jan 26 00:10:11 crc kubenswrapper[5124]: sleep 60 & wait Jan 26 00:10:11 crc kubenswrapper[5124]: continue Jan 26 00:10:11 crc kubenswrapper[5124]: fi Jan 26 00:10:11 crc kubenswrapper[5124]: Jan 26 00:10:11 crc kubenswrapper[5124]: # Append resolver entries for services Jan 26 00:10:11 crc kubenswrapper[5124]: rc=0 Jan 26 00:10:11 crc kubenswrapper[5124]: for svc in "${!svc_ips[@]}"; do Jan 26 00:10:11 crc kubenswrapper[5124]: for ip in ${svc_ips[${svc}]}; do Jan 26 00:10:11 crc kubenswrapper[5124]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
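# --- editor's aside (sketch; not part of the quoted container command) -------
# The repeated "sleep 60 & wait" idiom in this script (rather than a plain
# "sleep 60") keeps the pod responsive to termination: bash runs a trap only
# after the current foreground command finishes, but the builtin "wait"
# returns as soon as a trapped signal (here TERM, see the trap at the top of
# the script) arrives, so the container can exit promptly instead of sleeping
# out the full minute. Minimal illustration:
#   trap 'echo got TERM; exit 0' TERM; sleep 60 & wait   # exits promptly on TERM
# ------------------------------------------------------------------------------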
Jan 26 00:10:11 crc kubenswrapper[5124]: done Jan 26 00:10:11 crc kubenswrapper[5124]: done Jan 26 00:10:11 crc kubenswrapper[5124]: if [[ $rc -ne 0 ]]; then Jan 26 00:10:11 crc kubenswrapper[5124]: sleep 60 & wait Jan 26 00:10:11 crc kubenswrapper[5124]: continue Jan 26 00:10:11 crc kubenswrapper[5124]: fi Jan 26 00:10:11 crc kubenswrapper[5124]: Jan 26 00:10:11 crc kubenswrapper[5124]: Jan 26 00:10:11 crc kubenswrapper[5124]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Jan 26 00:10:11 crc kubenswrapper[5124]: # Replace /etc/hosts with our modified version if needed Jan 26 00:10:11 crc kubenswrapper[5124]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Jan 26 00:10:11 crc kubenswrapper[5124]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Jan 26 00:10:11 crc kubenswrapper[5124]: fi Jan 26 00:10:11 crc kubenswrapper[5124]: sleep 60 & wait Jan 26 00:10:11 crc kubenswrapper[5124]: unset svc_ips Jan 26 00:10:11 crc kubenswrapper[5124]: done Jan 26 00:10:11 crc kubenswrapper[5124]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dd8d8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-cwsts_openshift-dns(9e5684ab-0b94-4eef-af30-0c6c4ab528af): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:11 crc kubenswrapper[5124]: > logger="UnhandledError" Jan 26 00:10:11 crc kubenswrapper[5124]: E0126 00:10:11.702296 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" podUID="95fa0656-150a-4d93-a324-77a1306d91f7" Jan 26 00:10:11 crc kubenswrapper[5124]: E0126 00:10:11.702354 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-cwsts" podUID="9e5684ab-0b94-4eef-af30-0c6c4ab528af" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.705373 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.705437 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.705464 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.705499 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.705535 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:11Z","lastTransitionTime":"2026-01-26T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.706362 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.721515 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-87scd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c96023c-09ac-49d0-b8bd-09f46f6d9655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nb6p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nb6p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nb6p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-nb6p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nb6p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nb6p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nb6p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-87scd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.732213 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95fa0656-150a-4d93-a324-77a1306d91f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xt6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xt6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kmxcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.742656 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99e4f768-137c-4c5c-878d-3852f54a6df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f4382e3a3d54a3ceaf116dd5c6f7f458833943f7e948dc335bc038b3267463d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0733ced83f8a595542a3a5e1b2358bdd6e9c9867d4d31b83aba01450710a1393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0733ced83f8a595542a3a5e1b2358bdd6e9c9867d4d31b83aba01450710a1393\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:42Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.756207 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fa44516-2654-456d-893a-96341101557c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://2f8ecaa38888bb973b4fd3205014aa0edb7c85e52834f767b37256195a18e041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d8b9a76e6a593a00eb07a766e1124a3590c5c94c41c554bebb577109de5a4ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources
\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://37fbcde7240eabecd5368c44cfa4027f8d40c4f52393eb773692e55130233c75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6215e20f15c7a51f410c9c54859dda249912a0f1e02d737e53f957cd8d73cd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6215e20f15c7a51f410c9c54859dda249912a0f1e02d737e53f957cd8d73cd32\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:10:01Z\\\",\\\"message\\\":\\\"o:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0126 00:10:01.118231 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 00:10:01.118416 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0126 00:10:01.121827 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107044536/tls.crt::/tmp/serving-cert-107044536/tls.key\\\\\\\"\\\\nI0126 00:10:01.529054 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:10:01.532621 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:10:01.532658 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:10:01.532703 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:10:01.532730 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:10:01.539927 1 secure_serving.go:57] Forcing 
use of http/1.1 only\\\\nI0126 00:10:01.539960 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:10:01.539981 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:01.539994 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:01.540005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:10:01.540013 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:10:01.540020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:10:01.540025 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:10:01.543048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:10:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8e80c933db284b36b8dafc7bc44abe6be54c57c85857f99b2194d01cced7327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66f10dcb1c1f631da6488a7b4271bb9abc58d887ad17e7515550b916cf9a60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\"
:false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f10dcb1c1f631da6488a7b4271bb9abc58d887ad17e7515550b916cf9a60cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.767828 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.778120 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sctbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sctbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.803339 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df9bb628-c0ff-4254-8f43-66c1d289b343\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://94702470d0dd24faac34520e06613c5897b79dde56d2897fabe3a52050980120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4647208e6c84a5a6977c9b5f4a59a5a2ec2b2957cb47ea0707851ab13bef96ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b64f819f442260b8aaac091fe6a09b99175d27d2ec944332d5977a5ca5af58f0\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2d6d8389d6d15bd747b8ef74dc30f010429f962e34fe75b84935720929eab5ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://cd70d62ee532dd5a0aa8e04beb99f336153670709121aa892e5fa90aca675a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2d8ce299ce0a170138601002ffd93680b9c5360
205e0cae4cfe90de54c659ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d8ce299ce0a170138601002ffd93680b9c5360205e0cae4cfe90de54c659ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6205e86cd3c1859b05bf772087c7bf0fc9286354ae84a1027fbf60ebfbd62df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6205e86cd3c1859b05bf772087c7bf0fc9286354ae84a1027fbf60ebfbd62df5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e74e60b8dabfb2b1fb5d7448547929a39ed771ac32c9c8ac05eda98c02da7625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e74e60b8dabfb2b1fb5d7448547929a39ed771ac32c9c8ac05eda98c02da7625\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.807245 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.807284 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.807295 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.807309 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.807319 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:11Z","lastTransitionTime":"2026-01-26T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.814882 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.823924 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.831228 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-mpdlk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8660dad9-43c8-4c00-872a-e00a6baab0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx9l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx9l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-mpdlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.841174 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-smnb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f826f136-a910-4120-aa62-a08e427590c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbqfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-smnb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.849969 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09e553c5-fff7-48ff-8b44-c86ab881b7bc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b8da7cf7985b3076f734741cd805f8a4f273d7620fc89a9f9d02fa906489960c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://56cb10ea63f74e8cb16b42dc94949b4ddf748e8fdf73c942fb868db9001364e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://27a7b88896a26f50315b57e5bff7d5ec0511f09f0acb636c09e3c76caf1c686b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bbac19f70c66272a40bc7fe06106f95c04b995c67c127135d678b0ba9a78b1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbac19f70c66272a40bc7fe06106f95c04b995c67c127135d678b0ba9a78b1e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.858742 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.867675 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6grfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a1a609-6066-42a0-a450-b0e70365aa9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6jf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6grfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.876529 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-cwsts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e5684ab-0b94-4eef-af30-0c6c4ab528af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd8d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cwsts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:11 crc 
kubenswrapper[5124]: I0126 00:10:11.892409 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nb
db\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sdh5t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.908666 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.908716 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.908726 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.908738 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.908749 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:11Z","lastTransitionTime":"2026-01-26T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.915939 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c99ae5-3448-4d7b-9141-781a3683de72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a04fa4d6993fe4e83a7bd2d552bb16d9dc8e33e89a789170b8fec180c65b793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a4d65f95ca5f832e6ac85de46fd3d474221c3263ab1c2eba3123e4742fc5287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedRes
ources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7da17ce8ac77c94210b966d6bc7b376e82189a903321c9800662d2c12abf965d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://effeb6003c974dc677094f47337b7bf2ba1dad9209e7f72af53b5ac7d069f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.930248 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.941805 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6grfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a1a609-6066-42a0-a450-b0e70365aa9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6jf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6grfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.959602 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-cwsts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e5684ab-0b94-4eef-af30-0c6c4ab528af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd8d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cwsts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.975200 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.975254 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.975263 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.975276 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.975285 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:11Z","lastTransitionTime":"2026-01-26T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.978441 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"image
ID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126
.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sdh5t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:11 crc kubenswrapper[5124]: E0126 00:10:11.983546 5124 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"24413647-b67c-4e2e-bb9e-ac26cf92e744\\\",\\\"systemUUID\\\":\\\"c7fd9a8b-5491-44c4-bd96-9fa0fdb97ad8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.986058 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.986085 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.986094 5124 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.986107 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.986118 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:11Z","lastTransitionTime":"2026-01-26T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.991395 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c99ae5-3448-4d7b-9141-781a3683de72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a04fa4d6993fe4e83a7bd2d552bb16d9dc8e33e89a789170b8fec180c65b793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a4d65f95ca5f832e6ac85de46fd3d474221c3263ab1c2eba3123e4742fc5287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7da17ce8ac77c94210b966d6bc7b376e82189a903321c9800662d2c12abf965d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://effeb6003c974dc677094f47337b7bf2ba1dad9209e7f72af53b5ac7d069f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\
\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:11 crc kubenswrapper[5124]: E0126 00:10:11.993877 5124 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"24413647-b67c-4e2e-bb9e-ac26cf92e744\\\",\\\"systemUUID\\\":\\\"c7fd9a8b-5491-44c4-bd96-9fa0fdb97ad8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.996061 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.996092 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.996131 5124 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.996145 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:11 crc kubenswrapper[5124]: I0126 00:10:11.996155 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:11Z","lastTransitionTime":"2026-01-26T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.001430 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:12 crc kubenswrapper[5124]: E0126 00:10:12.004325 5124 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"24413647-b67c-4e2e-bb9e-ac26cf92e744\\\",\\\"systemUUID\\\":\\\"c7fd9a8b-5491-44c4-bd96-9fa0fdb97ad8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.011413 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.011453 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.011465 5124 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.011480 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.011489 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:12Z","lastTransitionTime":"2026-01-26T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.012068 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:12 crc kubenswrapper[5124]: E0126 00:10:12.019792 5124 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"24413647-b67c-4e2e-bb9e-ac26cf92e744\\\",\\\"systemUUID\\\":\\\"c7fd9a8b-5491-44c4-bd96-9fa0fdb97ad8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.027180 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.027235 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.027246 5124 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.027259 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.027269 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:12Z","lastTransitionTime":"2026-01-26T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.028065 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-87scd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c96023c-09ac-49d0-b8bd-09f46f6d9655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nb6p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, 
cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nb6p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nb6p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nb6p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nb6p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wher
eabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nb6p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nb6p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-87scd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:12 crc kubenswrapper[5124]: E0126 00:10:12.035787 5124 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:12Z\\\",\\\"message\\\":\\\"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\
\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"24413647-b67c-4e2e-bb9e-ac26cf92e744\\\",\\\"systemUUID\\\":\\\"c7fd9a8b-5491-44c4-bd96-9fa0fdb97ad8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:12 crc kubenswrapper[5124]: E0126 00:10:12.035905 5124 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.035967 5124 status_manager.go:919] 
"Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95fa0656-150a-4d93-a324-77a1306d91f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xt6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xt6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kmxcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 
00:10:12.037245 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.037278 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.037288 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.037303 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.037316 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:12Z","lastTransitionTime":"2026-01-26T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.045200 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99e4f768-137c-4c5c-878d-3852f54a6df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f4382e3a3d54a3ceaf116dd5c6f7f458833943f7e948dc335bc038b3267463d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocate
dResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0733ced83f8a595542a3a5e1b2358bdd6e9c9867d4d31b83aba01450710a1393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0733ced83f8a595542a3a5e1b2358bdd6e9c9867d4d31b83aba01450710a1393\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.052961 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.053064 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.053138 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.053215 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:12 crc kubenswrapper[5124]: E0126 00:10:12.053098 5124 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:10:12 crc kubenswrapper[5124]: E0126 00:10:12.053364 5124 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:10:12 crc kubenswrapper[5124]: E0126 00:10:12.053429 5124 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:12 crc kubenswrapper[5124]: E0126 00:10:12.053517 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:14.053502287 +0000 UTC m=+91.962421636 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:12 crc kubenswrapper[5124]: E0126 00:10:12.053140 5124 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:10:12 crc kubenswrapper[5124]: E0126 00:10:12.053708 5124 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:10:12 crc kubenswrapper[5124]: E0126 00:10:12.053721 5124 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:12 crc kubenswrapper[5124]: E0126 00:10:12.053753 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:14.053744183 +0000 UTC m=+91.962663532 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:12 crc kubenswrapper[5124]: E0126 00:10:12.053181 5124 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:10:12 crc kubenswrapper[5124]: E0126 00:10:12.053784 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:14.053778624 +0000 UTC m=+91.962697973 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:10:12 crc kubenswrapper[5124]: E0126 00:10:12.053283 5124 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:10:12 crc kubenswrapper[5124]: E0126 00:10:12.053873 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:14.053850106 +0000 UTC m=+91.962769505 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.055064 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fa44516-2654-456d-893a-96341101557c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://2f8ecaa38888bb973b4fd3205014aa0edb7c85e52834f767b37256195a18e041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d8b9a76e6a593a00eb07a766e1124a3590c5c94c41c554bebb577109de5a4ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",
\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://37fbcde7240eabecd5368c44cfa4027f8d40c4f52393eb773692e55130233c75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6215e20f15c7a51f410c9c54859dda249912a0f1e02d737e53f957cd8d73cd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6215e20f15c7a51f410c9c54859dda249912a0f1e02d737e53f957cd8d73cd32\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:10:01Z\\\",\\\"message\\\":\\\"o:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0126 00:10:01.118231 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 00:10:01.118416 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0126 00:10:01.121827 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107044536/tls.crt::/tmp/serving-cert-107044536/tls.key\\\\\\\"\\\\nI0126 00:10:01.529054 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:10:01.532621 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:10:01.532658 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:10:01.532703 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:10:01.532730 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:10:01.539927 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 00:10:01.539960 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:10:01.539981 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:01.539994 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:01.540005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:10:01.540013 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:10:01.540020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:10:01.540025 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:10:01.543048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:10:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8e80c933db284b36b8dafc7bc44abe6be54c57c85857f99b2194d01cced7327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66f10dcb1c1f631da6488a7b4271bb9abc58d887ad17e7515550b916cf9a60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"rest
artCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f10dcb1c1f631da6488a7b4271bb9abc58d887ad17e7515550b916cf9a60cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.063911 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.070885 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sctbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sctbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.087208 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df9bb628-c0ff-4254-8f43-66c1d289b343\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://94702470d0dd24faac34520e06613c5897b79dde56d2897fabe3a52050980120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4647208e6c84a5a6977c9b5f4a59a5a2ec2b2957cb47ea0707851ab13bef96ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b64f819f442260b8aaac091fe6a09b99175d27d2ec944332d5977a5ca5af58f0\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2d6d8389d6d15bd747b8ef74dc30f010429f962e34fe75b84935720929eab5ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://cd70d62ee532dd5a0aa8e04beb99f336153670709121aa892e5fa90aca675a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2d8ce299ce0a170138601002ffd93680b9c5360
205e0cae4cfe90de54c659ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d8ce299ce0a170138601002ffd93680b9c5360205e0cae4cfe90de54c659ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6205e86cd3c1859b05bf772087c7bf0fc9286354ae84a1027fbf60ebfbd62df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6205e86cd3c1859b05bf772087c7bf0fc9286354ae84a1027fbf60ebfbd62df5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e74e60b8dabfb2b1fb5d7448547929a39ed771ac32c9c8ac05eda98c02da7625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e74e60b8dabfb2b1fb5d7448547929a39ed771ac32c9c8ac05eda98c02da7625\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.095610 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.103602 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.112659 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-mpdlk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8660dad9-43c8-4c00-872a-e00a6baab0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx9l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx9l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-mpdlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.121955 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-smnb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f826f136-a910-4120-aa62-a08e427590c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbqfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-smnb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.131927 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09e553c5-fff7-48ff-8b44-c86ab881b7bc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b8da7cf7985b3076f734741cd805f8a4f273d7620fc89a9f9d02fa906489960c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://56cb10ea63f74e8cb16b42dc94949b4ddf748e8fdf73c942fb868db9001364e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://27a7b88896a26f50315b57e5bff7d5ec0511f09f0acb636c09e3c76caf1c686b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bbac19f70c66272a40bc7fe06106f95c04b995c67c127135d678b0ba9a78b1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbac19f70c66272a40bc7fe06106f95c04b995c67c127135d678b0ba9a78b1e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.139228 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.139263 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.139272 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.139285 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.139296 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:12Z","lastTransitionTime":"2026-01-26T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.153908 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:12 crc kubenswrapper[5124]: E0126 00:10:12.154063 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:14.154045063 +0000 UTC m=+92.062964412 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.241199 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.241234 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.241244 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.241258 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.241267 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:12Z","lastTransitionTime":"2026-01-26T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.254949 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b-metrics-certs\") pod \"network-metrics-daemon-sctbw\" (UID: \"08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b\") " pod="openshift-multus/network-metrics-daemon-sctbw" Jan 26 00:10:12 crc kubenswrapper[5124]: E0126 00:10:12.255078 5124 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:10:12 crc kubenswrapper[5124]: E0126 00:10:12.255124 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b-metrics-certs podName:08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b nodeName:}" failed. No retries permitted until 2026-01-26 00:10:14.255111612 +0000 UTC m=+92.164030961 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b-metrics-certs") pod "network-metrics-daemon-sctbw" (UID: "08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.343620 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.343961 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.344192 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.344370 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.344545 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:12Z","lastTransitionTime":"2026-01-26T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.365346 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.365467 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:12 crc kubenswrapper[5124]: E0126 00:10:12.365481 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:10:12 crc kubenswrapper[5124]: E0126 00:10:12.365636 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.365692 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:12 crc kubenswrapper[5124]: E0126 00:10:12.365785 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.365827 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sctbw" Jan 26 00:10:12 crc kubenswrapper[5124]: E0126 00:10:12.366062 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sctbw" podUID="08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.372236 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.373695 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.376108 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99e4f768-137c-4c5c-878d-3852f54a6df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f4382e3a3d54a3ceaf116dd5c6f7f458833943f7e948dc335bc038b3267463d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\
":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0733ced83f8a595542a3a5e1b2358bdd6e9c9867d4d31b83aba01450710a1393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0733ced83f8a595542a3a5e1b2358bdd6e9c9867d4d31b83aba01450710a1393\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.376482 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.378759 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.382436 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.385958 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.389155 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.391832 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.392554 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.393828 5124 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.394788 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.395400 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fa44516-2654-456d-893a-96341101557c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://2f8ecaa38888bb973b4fd3205014aa0edb7c85e52834f767b37256195a18e041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d8b9a76e6a593a00eb07a766e1124a3590c5c94c41c554bebb577109de5a4ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9
fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://37fbcde7240eabecd5368c44cfa4027f8d40c4f52393eb773692e55130233c75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6215e20f15c7a51f410c9c54859dda249912a0f1e02d737e53f957cd8d73cd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6215e20f15c7a51f410c9c54859dda249912a0f1e02d737e53f957cd8d73cd32\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:10:01Z\\\",\\\"message\\\":\\\"o:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0126 00:10:01.118231 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 00:10:01.118416 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0126 00:10:01.121827 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107044536/tls.crt::/tmp/serving-cert-107044536/tls.key\\\\\\\"\\\\nI0126 00:10:01.529054 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:10:01.532621 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:10:01.532658 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:10:01.532703 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:10:01.532730 
1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:10:01.539927 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 00:10:01.539960 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:10:01.539981 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:01.539994 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:01.540005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:10:01.540013 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:10:01.540020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:10:01.540025 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:10:01.543048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:10:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8e80c933db284b36b8dafc7bc44abe6be54c57c85857f99b2194d01cced7327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66f10dcb1c1f631da6488a7b4271bb9abc58d887ad17e7515550b916cf9a60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\
"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f10dcb1c1f631da6488a7b4271bb9abc58d887ad17e7515550b916cf9a60cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.396241 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.396915 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.398290 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.398773 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.399435 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.400507 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.401540 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.402802 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.404189 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.405134 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.407022 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.408051 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.408993 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.410185 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.410485 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.411018 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.412177 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.412858 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.414899 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.415762 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.416815 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.418071 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.419238 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.420619 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.421426 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.422282 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" 
path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.422241 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sctbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sctbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.422983 5124 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 26 
00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.423099 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.426096 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.426997 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.428825 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.431029 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.433556 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.436053 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.437549 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.439507 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.441269 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.444097 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.445732 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.446850 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.446917 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.446936 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:12 crc kubenswrapper[5124]: 
I0126 00:10:12.446963 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.446984 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:12Z","lastTransitionTime":"2026-01-26T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.448336 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.450210 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.453247 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.454524 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df9bb628-c0ff-4254-8f43-66c1d289b343\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://94702470d0dd24faac34520e06613c5897b79dde56d2897fabe3a52050980120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\
\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4647208e6c84a5a6977c9b5f4a59a5a2ec2b2957cb47ea0707851ab13bef96ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b64f819f442260b8aaac091fe6a09b99175d27d2ec944332d5977a5ca5af58f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2d6d8389d6d15bd747b8ef74dc30f010429f962e34fe75b84935720929eab5ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]
},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://cd70d62ee532dd5a0aa8e04beb99f336153670709121aa892e5fa90aca675a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2d8ce299ce0a170138601002ffd93680b9c5360205e0cae4cfe90de54c659ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d8ce299ce0a170138601002ffd93680b9c5360205e0cae4cfe90de54c659ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6205e86cd3c1859b05bf772087c7bf0fc9286354ae84a1027fbf60ebfbd62df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6205e86cd3c1859b05bf772087c7bf0fc9286354ae84a1027fbf60ebfbd62df5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e74e60b8dabfb2b1fb5d7448547929a39ed771ac32c9c8ac05eda98c02da7625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e74e60b8dabfb2b1fb5d7448547929a39ed771ac32c9c8ac05eda98c02da7625\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.455430 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.459623 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.462711 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.464175 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.466977 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.468702 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.472672 
5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.485160 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.500801 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-mpdlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8660dad9-43c8-4c00-872a-e00a6baab0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx9l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx9l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-mpdlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.510812 5124 
status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-smnb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f826f136-a910-4120-aa62-a08e427590c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbqfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-smnb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.526369 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09e553c5-fff7-48ff-8b44-c86ab881b7bc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b8da7cf7985b3076f734741cd805f8a4f273d7620fc89a9f9d02fa906489960c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://56cb10ea63f74e8cb16b42dc94949b4ddf748e8fdf73c942fb868db9001364e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://27a7b88896a26f50315b57e5bff7d5ec0511f09f0acb636c09e3c76caf1c686b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bbac19f70c66272a40bc7fe06106f95c04b995c67c127135d678b0ba9a78b1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbac19f70c66272a40bc7fe06106f95c04b995c67c127135d678b0ba9a78b1e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.541117 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.548942 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6grfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a1a609-6066-42a0-a450-b0e70365aa9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6jf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6grfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.549741 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.549795 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.549812 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.549834 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.549852 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:12Z","lastTransitionTime":"2026-01-26T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.557138 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-cwsts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e5684ab-0b94-4eef-af30-0c6c4ab528af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd8d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cwsts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.574150 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sdh5t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.586378 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c99ae5-3448-4d7b-9141-781a3683de72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a04fa4d6993fe4e83a7bd2d552bb16d9dc8e33e89a789170b8fec180c65b793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a4d65f95ca5f832e6ac85de46fd3d474221c3263
ab1c2eba3123e4742fc5287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7da17ce8ac77c94210b966d6bc7b376e82189a903321c9800662d2c12abf965d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://effeb6003c974dc677094f47337b7bf2ba1dad9209e7f72af53b5ac7d069f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\
\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.596671 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.605142 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.647724 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-87scd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c96023c-09ac-49d0-b8bd-09f46f6d9655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nb6p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nb6p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nb6p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nb6p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nb6p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nb6p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nb6p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-87scd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.652057 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.652127 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.652150 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.652176 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.652193 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:12Z","lastTransitionTime":"2026-01-26T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.683758 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95fa0656-150a-4d93-a324-77a1306d91f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xt6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been 
read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xt6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kmxcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.753633 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.753698 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.753723 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.753757 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.753789 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:12Z","lastTransitionTime":"2026-01-26T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.855832 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.855915 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.855927 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.855947 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.855960 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:12Z","lastTransitionTime":"2026-01-26T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.959286 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.959353 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.959379 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.959409 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:12 crc kubenswrapper[5124]: I0126 00:10:12.959431 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:12Z","lastTransitionTime":"2026-01-26T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.061727 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.061776 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.061790 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.061853 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.061867 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:13Z","lastTransitionTime":"2026-01-26T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.163194 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.163238 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.163248 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.163263 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.163274 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:13Z","lastTransitionTime":"2026-01-26T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.265250 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.265296 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.265304 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.265317 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.265327 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:13Z","lastTransitionTime":"2026-01-26T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.366833 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.366887 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.366899 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.366912 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.366922 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:13Z","lastTransitionTime":"2026-01-26T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.469053 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.469087 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.469095 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.469110 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.469119 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:13Z","lastTransitionTime":"2026-01-26T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.570560 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.570622 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.570632 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.570646 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.570655 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:13Z","lastTransitionTime":"2026-01-26T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.672712 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.672765 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.672778 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.672794 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.672824 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:13Z","lastTransitionTime":"2026-01-26T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.774978 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.775025 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.775037 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.775057 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.775068 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:13Z","lastTransitionTime":"2026-01-26T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.876786 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.876838 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.876850 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.876864 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.876875 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:13Z","lastTransitionTime":"2026-01-26T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.978872 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.978909 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.978918 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.978932 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:13 crc kubenswrapper[5124]: I0126 00:10:13.978942 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:13Z","lastTransitionTime":"2026-01-26T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.070758 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.070876 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:14 crc kubenswrapper[5124]: E0126 00:10:14.070898 5124 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.070917 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.070955 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:14 crc kubenswrapper[5124]: E0126 00:10:14.071008 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:18.070977539 +0000 UTC m=+95.979896938 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:10:14 crc kubenswrapper[5124]: E0126 00:10:14.071124 5124 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:10:14 crc kubenswrapper[5124]: E0126 00:10:14.071149 5124 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:10:14 crc kubenswrapper[5124]: E0126 00:10:14.071166 5124 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:14 crc kubenswrapper[5124]: E0126 00:10:14.071209 5124 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:10:14 crc kubenswrapper[5124]: E0126 00:10:14.071236 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:18.071216825 +0000 UTC m=+95.980136204 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:14 crc kubenswrapper[5124]: E0126 00:10:14.071268 5124 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:10:14 crc kubenswrapper[5124]: E0126 00:10:14.071371 5124 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:10:14 crc kubenswrapper[5124]: E0126 00:10:14.071286 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:18.071268196 +0000 UTC m=+95.980187545 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:10:14 crc kubenswrapper[5124]: E0126 00:10:14.071494 5124 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:14 crc kubenswrapper[5124]: E0126 00:10:14.071632 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:18.071576875 +0000 UTC m=+95.980496264 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.080670 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.080717 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.080730 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.080748 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.080760 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:14Z","lastTransitionTime":"2026-01-26T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.172211 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:14 crc kubenswrapper[5124]: E0126 00:10:14.172382 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:18.172362037 +0000 UTC m=+96.081281386 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.183012 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.183059 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.183072 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.183088 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.183099 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:14Z","lastTransitionTime":"2026-01-26T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.273567 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b-metrics-certs\") pod \"network-metrics-daemon-sctbw\" (UID: \"08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b\") " pod="openshift-multus/network-metrics-daemon-sctbw" Jan 26 00:10:14 crc kubenswrapper[5124]: E0126 00:10:14.273770 5124 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:10:14 crc kubenswrapper[5124]: E0126 00:10:14.273826 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b-metrics-certs podName:08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b nodeName:}" failed. No retries permitted until 2026-01-26 00:10:18.273812017 +0000 UTC m=+96.182731366 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b-metrics-certs") pod "network-metrics-daemon-sctbw" (UID: "08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.285055 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.285107 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.285129 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.285147 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.285158 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:14Z","lastTransitionTime":"2026-01-26T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.365305 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.365330 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sctbw" Jan 26 00:10:14 crc kubenswrapper[5124]: E0126 00:10:14.365412 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.365461 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:14 crc kubenswrapper[5124]: E0126 00:10:14.365492 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sctbw" podUID="08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.365305 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:14 crc kubenswrapper[5124]: E0126 00:10:14.365580 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:10:14 crc kubenswrapper[5124]: E0126 00:10:14.365726 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.386887 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.386933 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.386945 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.386964 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.386977 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:14Z","lastTransitionTime":"2026-01-26T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.488473 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.488536 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.488547 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.488561 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.488570 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:14Z","lastTransitionTime":"2026-01-26T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.590887 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.590933 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.590941 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.590957 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.590966 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:14Z","lastTransitionTime":"2026-01-26T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.693086 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.693139 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.693155 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.693171 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.693183 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:14Z","lastTransitionTime":"2026-01-26T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.794955 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.794986 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.794995 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.795009 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.795018 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:14Z","lastTransitionTime":"2026-01-26T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.897074 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.897120 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.897131 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.897146 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.897156 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:14Z","lastTransitionTime":"2026-01-26T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.999226 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.999293 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.999311 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.999333 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:14 crc kubenswrapper[5124]: I0126 00:10:14.999351 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:14Z","lastTransitionTime":"2026-01-26T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.100996 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.101044 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.101058 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.101076 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.101087 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:15Z","lastTransitionTime":"2026-01-26T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.202912 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.202966 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.202976 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.202990 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.203000 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:15Z","lastTransitionTime":"2026-01-26T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.305220 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.305257 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.305266 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.305278 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.305287 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:15Z","lastTransitionTime":"2026-01-26T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.407082 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.407133 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.407144 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.407162 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.407176 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:15Z","lastTransitionTime":"2026-01-26T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.508890 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.508934 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.508946 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.508962 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.508973 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:15Z","lastTransitionTime":"2026-01-26T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.610909 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.610994 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.611005 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.611020 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.611031 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:15Z","lastTransitionTime":"2026-01-26T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.713357 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.713454 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.713482 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.713515 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.713565 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:15Z","lastTransitionTime":"2026-01-26T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.816218 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.816280 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.816295 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.816312 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.816321 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:15Z","lastTransitionTime":"2026-01-26T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.919257 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.919335 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.919355 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.919382 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:15 crc kubenswrapper[5124]: I0126 00:10:15.919401 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:15Z","lastTransitionTime":"2026-01-26T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.020810 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.020845 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.020853 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.020869 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.020878 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:16Z","lastTransitionTime":"2026-01-26T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.122522 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.122569 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.122582 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.122619 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.122631 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:16Z","lastTransitionTime":"2026-01-26T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.224943 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.225013 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.225035 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.225060 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.225078 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:16Z","lastTransitionTime":"2026-01-26T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.327300 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.327389 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.327415 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.327443 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.327466 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:16Z","lastTransitionTime":"2026-01-26T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.365822 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.365875 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:16 crc kubenswrapper[5124]: E0126 00:10:16.365958 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:10:16 crc kubenswrapper[5124]: E0126 00:10:16.366050 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.366113 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sctbw" Jan 26 00:10:16 crc kubenswrapper[5124]: E0126 00:10:16.366321 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sctbw" podUID="08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.366406 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:16 crc kubenswrapper[5124]: E0126 00:10:16.366479 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.429962 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.430043 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.430058 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.430078 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.430100 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:16Z","lastTransitionTime":"2026-01-26T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.532410 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.532470 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.532490 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.532513 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.532531 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:16Z","lastTransitionTime":"2026-01-26T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.634826 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.634868 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.634880 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.634895 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.634905 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:16Z","lastTransitionTime":"2026-01-26T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.737017 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.737056 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.737065 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.737079 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.737088 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:16Z","lastTransitionTime":"2026-01-26T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.839798 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.839835 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.839846 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.839860 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.839869 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:16Z","lastTransitionTime":"2026-01-26T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.942008 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.942059 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.942070 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.942084 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:16 crc kubenswrapper[5124]: I0126 00:10:16.942094 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:16Z","lastTransitionTime":"2026-01-26T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.043829 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.043877 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.043887 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.043902 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.043911 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:17Z","lastTransitionTime":"2026-01-26T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.146093 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.146132 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.146184 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.146199 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.146211 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:17Z","lastTransitionTime":"2026-01-26T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.247806 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.247855 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.247868 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.247883 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.247897 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:17Z","lastTransitionTime":"2026-01-26T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.350480 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.350557 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.350579 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.350637 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.350657 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:17Z","lastTransitionTime":"2026-01-26T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.452663 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.452737 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.452762 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.452796 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.452819 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:17Z","lastTransitionTime":"2026-01-26T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.555936 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.556014 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.556034 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.556061 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.556080 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:17Z","lastTransitionTime":"2026-01-26T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.658736 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.658785 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.658799 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.658817 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.658826 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:17Z","lastTransitionTime":"2026-01-26T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.761429 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.761474 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.761485 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.761506 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.761523 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:17Z","lastTransitionTime":"2026-01-26T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.864407 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.864491 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.864513 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.864543 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.864566 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:17Z","lastTransitionTime":"2026-01-26T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.967450 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.967526 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.967549 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.967578 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:17 crc kubenswrapper[5124]: I0126 00:10:17.967634 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:17Z","lastTransitionTime":"2026-01-26T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.070305 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.070385 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.070411 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.070443 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.070466 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:18Z","lastTransitionTime":"2026-01-26T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.116279 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.116362 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.116426 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.116480 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:18 crc kubenswrapper[5124]: E0126 00:10:18.116540 5124 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:10:18 crc kubenswrapper[5124]: E0126 00:10:18.116617 5124 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:10:18 crc kubenswrapper[5124]: E0126 00:10:18.116692 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:26.116669008 +0000 UTC m=+104.025588377 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:10:18 crc kubenswrapper[5124]: E0126 00:10:18.116628 5124 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:10:18 crc kubenswrapper[5124]: E0126 00:10:18.116704 5124 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:10:18 crc kubenswrapper[5124]: E0126 00:10:18.116719 5124 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:18 crc kubenswrapper[5124]: E0126 00:10:18.116550 5124 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:10:18 crc kubenswrapper[5124]: E0126 00:10:18.116774 5124 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:10:18 crc kubenswrapper[5124]: E0126 00:10:18.116782 5124 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:18 crc kubenswrapper[5124]: E0126 00:10:18.116783 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:26.1167586 +0000 UTC m=+104.025677989 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:10:18 crc kubenswrapper[5124]: E0126 00:10:18.116806 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:26.116795461 +0000 UTC m=+104.025714810 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:18 crc kubenswrapper[5124]: E0126 00:10:18.116818 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:26.116812981 +0000 UTC m=+104.025732320 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.172724 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.172768 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.172779 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.172793 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.172803 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:18Z","lastTransitionTime":"2026-01-26T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.217485 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:18 crc kubenswrapper[5124]: E0126 00:10:18.217742 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:26.217719787 +0000 UTC m=+104.126639146 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.275462 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.275506 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.275516 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.275531 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.275542 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:18Z","lastTransitionTime":"2026-01-26T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.318540 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b-metrics-certs\") pod \"network-metrics-daemon-sctbw\" (UID: \"08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b\") " pod="openshift-multus/network-metrics-daemon-sctbw" Jan 26 00:10:18 crc kubenswrapper[5124]: E0126 00:10:18.318783 5124 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:10:18 crc kubenswrapper[5124]: E0126 00:10:18.318895 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b-metrics-certs podName:08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b nodeName:}" failed. No retries permitted until 2026-01-26 00:10:26.318862548 +0000 UTC m=+104.227781907 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b-metrics-certs") pod "network-metrics-daemon-sctbw" (UID: "08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.365417 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.365455 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:18 crc kubenswrapper[5124]: E0126 00:10:18.365542 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.365484 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.365470 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sctbw" Jan 26 00:10:18 crc kubenswrapper[5124]: E0126 00:10:18.365627 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:10:18 crc kubenswrapper[5124]: E0126 00:10:18.365595 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:10:18 crc kubenswrapper[5124]: E0126 00:10:18.365748 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sctbw" podUID="08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.377734 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.377765 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.377780 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.377794 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.377806 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:18Z","lastTransitionTime":"2026-01-26T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.479806 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.479881 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.479898 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.479915 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.479927 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:18Z","lastTransitionTime":"2026-01-26T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.582502 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.582542 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.582552 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.582568 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.582578 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:18Z","lastTransitionTime":"2026-01-26T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.684814 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.684859 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.684872 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.684888 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.684900 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:18Z","lastTransitionTime":"2026-01-26T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.787440 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.787796 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.787808 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.787824 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.787839 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:18Z","lastTransitionTime":"2026-01-26T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.890543 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.890653 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.890681 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.890713 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.890737 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:18Z","lastTransitionTime":"2026-01-26T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.993350 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.993435 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.993464 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.993494 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:18 crc kubenswrapper[5124]: I0126 00:10:18.993515 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:18Z","lastTransitionTime":"2026-01-26T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.095671 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.095723 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.095740 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.095761 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.095851 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:19Z","lastTransitionTime":"2026-01-26T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.197630 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.197676 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.197719 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.197737 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.197750 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:19Z","lastTransitionTime":"2026-01-26T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.300326 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.300390 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.300411 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.300435 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.300455 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:19Z","lastTransitionTime":"2026-01-26T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.403252 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.403351 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.403379 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.403408 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.403428 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:19Z","lastTransitionTime":"2026-01-26T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.505424 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.505471 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.505482 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.505495 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.505504 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:19Z","lastTransitionTime":"2026-01-26T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.607487 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.607524 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.607533 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.607545 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.607554 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:19Z","lastTransitionTime":"2026-01-26T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.709757 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.709804 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.709816 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.709832 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.709843 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:19Z","lastTransitionTime":"2026-01-26T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.811865 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.811919 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.811936 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.811956 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.811971 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:19Z","lastTransitionTime":"2026-01-26T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.913952 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.914017 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.914035 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.914062 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:19 crc kubenswrapper[5124]: I0126 00:10:19.914082 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:19Z","lastTransitionTime":"2026-01-26T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.016305 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.016368 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.016387 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.016411 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.016429 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:20Z","lastTransitionTime":"2026-01-26T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.118798 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.118869 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.118890 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.118914 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.118931 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:20Z","lastTransitionTime":"2026-01-26T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.221464 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.221522 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.221540 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.221565 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.221621 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:20Z","lastTransitionTime":"2026-01-26T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.324671 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.324726 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.324739 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.324756 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.324768 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:20Z","lastTransitionTime":"2026-01-26T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.365565 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.365624 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:20 crc kubenswrapper[5124]: E0126 00:10:20.365713 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.365624 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sctbw" Jan 26 00:10:20 crc kubenswrapper[5124]: E0126 00:10:20.365810 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:10:20 crc kubenswrapper[5124]: E0126 00:10:20.366027 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sctbw" podUID="08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.366042 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:20 crc kubenswrapper[5124]: E0126 00:10:20.366310 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.426537 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.426580 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.426615 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.426636 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.426648 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:20Z","lastTransitionTime":"2026-01-26T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.528674 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.528747 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.528756 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.528779 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.528789 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:20Z","lastTransitionTime":"2026-01-26T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.631001 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.631076 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.631103 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.631134 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.631158 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:20Z","lastTransitionTime":"2026-01-26T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.733673 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.733737 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.733757 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.733784 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.733805 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:20Z","lastTransitionTime":"2026-01-26T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.835905 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.835979 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.836001 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.836026 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.836047 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:20Z","lastTransitionTime":"2026-01-26T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.938179 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.938235 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.938253 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.938278 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:20 crc kubenswrapper[5124]: I0126 00:10:20.938295 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:20Z","lastTransitionTime":"2026-01-26T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.040189 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.040241 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.040254 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.040271 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.040286 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:21Z","lastTransitionTime":"2026-01-26T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.143194 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.143280 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.143309 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.143338 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.143360 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:21Z","lastTransitionTime":"2026-01-26T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.246319 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.246373 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.246392 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.246412 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.246427 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:21Z","lastTransitionTime":"2026-01-26T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.349024 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.349071 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.349081 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.349100 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.349113 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:21Z","lastTransitionTime":"2026-01-26T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.365243 5124 scope.go:117] "RemoveContainer" containerID="6215e20f15c7a51f410c9c54859dda249912a0f1e02d737e53f957cd8d73cd32" Jan 26 00:10:21 crc kubenswrapper[5124]: E0126 00:10:21.365496 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.451047 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.451112 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.451125 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.451140 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.451150 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:21Z","lastTransitionTime":"2026-01-26T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.553360 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.553406 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.553415 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.553429 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.553438 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:21Z","lastTransitionTime":"2026-01-26T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.655831 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.655875 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.655884 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.655900 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.655910 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:21Z","lastTransitionTime":"2026-01-26T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.758175 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.758222 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.758232 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.758247 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.758261 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:21Z","lastTransitionTime":"2026-01-26T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.860548 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.860620 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.860633 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.860651 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.860662 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:21Z","lastTransitionTime":"2026-01-26T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.963201 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.963280 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.963305 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.963331 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:21 crc kubenswrapper[5124]: I0126 00:10:21.963350 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:21Z","lastTransitionTime":"2026-01-26T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.065073 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.065147 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.065162 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.065189 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.065202 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:22Z","lastTransitionTime":"2026-01-26T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.167162 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.167204 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.167219 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.167235 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.167246 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:22Z","lastTransitionTime":"2026-01-26T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.268713 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.268784 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.268811 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.268845 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.268870 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:22Z","lastTransitionTime":"2026-01-26T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.326837 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.326905 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.326926 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.326953 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.326972 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:22Z","lastTransitionTime":"2026-01-26T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:22 crc kubenswrapper[5124]: E0126 00:10:22.345108 5124 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"24413647-b67c-4e2e-bb9e-ac26cf92e744\\\",\\\"systemUUID\\\":\\\"c7fd9a8b-5491-44c4-bd96-9fa0fdb97ad8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.348311 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.348362 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.348372 5124 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.348385 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.348411 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:22Z","lastTransitionTime":"2026-01-26T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:22 crc kubenswrapper[5124]: E0126 00:10:22.363256 5124 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"24413647-b67c-4e2e-bb9e-ac26cf92e744\\\",\\\"systemUUID\\\":\\\"c7fd9a8b-5491-44c4-bd96-9fa0fdb97ad8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.365063 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sctbw" Jan 26 00:10:22 crc kubenswrapper[5124]: E0126 00:10:22.365177 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sctbw" podUID="08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.365459 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:22 crc kubenswrapper[5124]: E0126 00:10:22.365519 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.365648 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.365658 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:22 crc kubenswrapper[5124]: E0126 00:10:22.365730 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:10:22 crc kubenswrapper[5124]: E0126 00:10:22.365783 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.367332 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.367358 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.367367 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.367382 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.367391 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:22Z","lastTransitionTime":"2026-01-26T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:22 crc kubenswrapper[5124]: E0126 00:10:22.378055 5124 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"24413647-b67c-4e2e-bb9e-ac26cf92e744\\\",\\\"systemUUID\\\":\\\"c7fd9a8b-5491-44c4-bd96-9fa0fdb97ad8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.380255 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sdh5t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.382104 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.382788 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.382829 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.382866 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.382893 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:22Z","lastTransitionTime":"2026-01-26T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.393025 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c99ae5-3448-4d7b-9141-781a3683de72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a04fa4d6993fe4e83a7bd2d552bb16d9dc8e33e89a789170b8fec180c65b793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a4d65f95ca5f832e6ac85de46fd3d474221c3263ab1c2eba3123e4742fc5287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedRes
ources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7da17ce8ac77c94210b966d6bc7b376e82189a903321c9800662d2c12abf965d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://effeb6003c974dc677094f47337b7bf2ba1dad9209e7f72af53b5ac7d069f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:22 crc kubenswrapper[5124]: E0126 00:10:22.393049 5124 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef157
6b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774
342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"24413647-b67c-4e2e-bb9e-ac26cf92e744\\\",\\\"systemUUID\\\":\\\"c7fd9a8b-5491-44c4-bd96-9fa0fdb97ad8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.396872 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.396924 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.396941 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.396964 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.396980 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:22Z","lastTransitionTime":"2026-01-26T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.403710 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:22 crc kubenswrapper[5124]: E0126 00:10:22.406111 5124 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:22Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBy
tes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"24413647-b67c-4e2e-bb9e-ac26cf92e744\\\",\\\"systemUUID\\\":\\\"
c7fd9a8b-5491-44c4-bd96-9fa0fdb97ad8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:22 crc kubenswrapper[5124]: E0126 00:10:22.406274 5124 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.407837 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.407874 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.407887 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.407904 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.407918 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:22Z","lastTransitionTime":"2026-01-26T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.413217 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.423835 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-87scd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c96023c-09ac-49d0-b8bd-09f46f6d9655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nb6p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nb6p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nb6p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nb6p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nb6p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nb6p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nb6p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-87scd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.432021 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95fa0656-150a-4d93-a324-77a1306d91f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xt6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xt6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kmxcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.442519 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99e4f768-137c-4c5c-878d-3852f54a6df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f4382e3a3d54a3ceaf116dd5c6f7f458833943f7e948dc335bc038b3267463d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0733ced83f8a595542a3a5e1b2358bdd6e9c9867d4d31b83aba01450710a1393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0733ced83f8a595542a3a5e1b2358bdd6e9c9867d4d31b83aba01450710a1393\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:42Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.455287 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fa44516-2654-456d-893a-96341101557c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://2f8ecaa38888bb973b4fd3205014aa0edb7c85e52834f767b37256195a18e041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d8b9a76e6a593a00eb07a766e1124a3590c5c94c41c554bebb577109de5a4ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources
\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://37fbcde7240eabecd5368c44cfa4027f8d40c4f52393eb773692e55130233c75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6215e20f15c7a51f410c9c54859dda249912a0f1e02d737e53f957cd8d73cd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6215e20f15c7a51f410c9c54859dda249912a0f1e02d737e53f957cd8d73cd32\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:10:01Z\\\",\\\"message\\\":\\\"o:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0126 00:10:01.118231 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 00:10:01.118416 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0126 00:10:01.121827 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107044536/tls.crt::/tmp/serving-cert-107044536/tls.key\\\\\\\"\\\\nI0126 00:10:01.529054 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:10:01.532621 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:10:01.532658 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:10:01.532703 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:10:01.532730 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:10:01.539927 1 secure_serving.go:57] Forcing 
use of http/1.1 only\\\\nI0126 00:10:01.539960 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:10:01.539981 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:01.539994 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:01.540005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:10:01.540013 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:10:01.540020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:10:01.540025 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:10:01.543048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:10:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8e80c933db284b36b8dafc7bc44abe6be54c57c85857f99b2194d01cced7327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66f10dcb1c1f631da6488a7b4271bb9abc58d887ad17e7515550b916cf9a60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\"
:false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f10dcb1c1f631da6488a7b4271bb9abc58d887ad17e7515550b916cf9a60cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.465606 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.474308 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sctbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sctbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.493670 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df9bb628-c0ff-4254-8f43-66c1d289b343\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://94702470d0dd24faac34520e06613c5897b79dde56d2897fabe3a52050980120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4647208e6c84a5a6977c9b5f4a59a5a2ec2b2957cb47ea0707851ab13bef96ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b64f819f442260b8aaac091fe6a09b99175d27d2ec944332d5977a5ca5af58f0\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2d6d8389d6d15bd747b8ef74dc30f010429f962e34fe75b84935720929eab5ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://cd70d62ee532dd5a0aa8e04beb99f336153670709121aa892e5fa90aca675a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2d8ce299ce0a170138601002ffd93680b9c5360
205e0cae4cfe90de54c659ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d8ce299ce0a170138601002ffd93680b9c5360205e0cae4cfe90de54c659ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6205e86cd3c1859b05bf772087c7bf0fc9286354ae84a1027fbf60ebfbd62df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6205e86cd3c1859b05bf772087c7bf0fc9286354ae84a1027fbf60ebfbd62df5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e74e60b8dabfb2b1fb5d7448547929a39ed771ac32c9c8ac05eda98c02da7625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e74e60b8dabfb2b1fb5d7448547929a39ed771ac32c9c8ac05eda98c02da7625\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.503545 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.510316 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.510355 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.510368 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.510383 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.510394 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:22Z","lastTransitionTime":"2026-01-26T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.513454 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.522699 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-mpdlk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8660dad9-43c8-4c00-872a-e00a6baab0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx9l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx9l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-mpdlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.534414 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-smnb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f826f136-a910-4120-aa62-a08e427590c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet 
been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbqfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-smnb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.545104 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09e553c5-fff7-48ff-8b44-c86ab881b7bc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b8da7cf7985b3076f734741cd805f8a4f273d7620fc89a9f9d02fa906489960c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://56cb10ea63f74e8cb16b42dc94949b4ddf748e8fdf73c942fb868db9001364e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://27a7b88896a26f50315b57e5bff7d5ec0511f09f0acb636c09e3c76caf1c686b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bbac19f70c66272a40bc7fe06106f95c04b995c67c127135d678b0ba9a78b1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbac19f70c66272a40bc7fe06106f95c04b995c67c127135d678b0ba9a78b1e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.554830 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.563273 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6grfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a1a609-6066-42a0-a450-b0e70365aa9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6jf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6grfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.573135 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-cwsts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e5684ab-0b94-4eef-af30-0c6c4ab528af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd8d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cwsts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.612619 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.612659 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.612672 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.612692 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.612705 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:22Z","lastTransitionTime":"2026-01-26T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.715163 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.715239 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.715259 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.715281 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.715327 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:22Z","lastTransitionTime":"2026-01-26T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.763449 5124 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.817886 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.817962 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.817972 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.817987 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.817997 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:22Z","lastTransitionTime":"2026-01-26T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.920361 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.920553 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.920652 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.920726 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:22 crc kubenswrapper[5124]: I0126 00:10:22.920786 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:22Z","lastTransitionTime":"2026-01-26T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.022906 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.022954 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.022966 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.022985 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.022998 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:23Z","lastTransitionTime":"2026-01-26T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.125379 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.125435 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.125453 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.125475 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.125491 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:23Z","lastTransitionTime":"2026-01-26T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.227528 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.227608 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.227623 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.227646 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.227661 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:23Z","lastTransitionTime":"2026-01-26T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.330568 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.330971 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.331200 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.331358 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.331484 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:23Z","lastTransitionTime":"2026-01-26T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.434891 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.434942 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.434954 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.434970 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.434982 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:23Z","lastTransitionTime":"2026-01-26T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.536716 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.536772 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.536785 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.536806 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.536826 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:23Z","lastTransitionTime":"2026-01-26T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.639654 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.639704 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.639714 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.639731 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.639744 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:23Z","lastTransitionTime":"2026-01-26T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.741373 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.741668 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.741798 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.741889 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.741975 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:23Z","lastTransitionTime":"2026-01-26T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.844876 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.844962 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.844992 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.845096 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.845121 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:23Z","lastTransitionTime":"2026-01-26T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.948549 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.948712 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.948745 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.948781 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:23 crc kubenswrapper[5124]: I0126 00:10:23.948809 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:23Z","lastTransitionTime":"2026-01-26T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.051700 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.051740 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.051749 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.051762 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.051772 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:24Z","lastTransitionTime":"2026-01-26T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.153745 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.153828 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.153853 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.153888 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.153911 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:24Z","lastTransitionTime":"2026-01-26T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.256239 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.256317 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.256341 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.256372 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.256393 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:24Z","lastTransitionTime":"2026-01-26T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.359129 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.359215 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.359242 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.359290 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.359322 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:24Z","lastTransitionTime":"2026-01-26T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.365166 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.365303 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:24 crc kubenswrapper[5124]: E0126 00:10:24.365480 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.365639 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.365852 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sctbw" Jan 26 00:10:24 crc kubenswrapper[5124]: E0126 00:10:24.365848 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:10:24 crc kubenswrapper[5124]: E0126 00:10:24.365993 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:10:24 crc kubenswrapper[5124]: E0126 00:10:24.366183 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sctbw" podUID="08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.462241 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.462699 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.462710 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.462726 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.462736 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:24Z","lastTransitionTime":"2026-01-26T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.565523 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.565573 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.565619 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.565638 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.565648 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:24Z","lastTransitionTime":"2026-01-26T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.667552 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.667843 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.667913 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.667976 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.668038 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:24Z","lastTransitionTime":"2026-01-26T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.730950 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" event={"ID":"95fa0656-150a-4d93-a324-77a1306d91f7","Type":"ContainerStarted","Data":"d83d6e9dbee8896d25299332774ac25503be88561fd1040886735c806d9b1d94"} Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.769790 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.769835 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.769847 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.769870 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.769881 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:24Z","lastTransitionTime":"2026-01-26T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.872561 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.872624 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.872641 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.872659 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.872670 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:24Z","lastTransitionTime":"2026-01-26T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.975047 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.975128 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.975154 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.975185 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:24 crc kubenswrapper[5124]: I0126 00:10:24.975210 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:24Z","lastTransitionTime":"2026-01-26T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.078063 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.078430 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.078440 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.078456 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.078467 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:25Z","lastTransitionTime":"2026-01-26T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.180533 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.180573 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.180604 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.180620 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.180632 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:25Z","lastTransitionTime":"2026-01-26T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.282409 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.282507 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.282526 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.282548 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.282569 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:25Z","lastTransitionTime":"2026-01-26T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.384764 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.384810 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.384823 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.384839 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.384851 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:25Z","lastTransitionTime":"2026-01-26T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.487931 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.488165 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.488174 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.488187 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.488196 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:25Z","lastTransitionTime":"2026-01-26T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.591084 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.591121 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.591130 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.591144 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.591153 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:25Z","lastTransitionTime":"2026-01-26T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.693278 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.693318 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.693327 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.693338 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.693347 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:25Z","lastTransitionTime":"2026-01-26T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.735225 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" event={"ID":"95fa0656-150a-4d93-a324-77a1306d91f7","Type":"ContainerStarted","Data":"019cc5f7caa17e5189240344e2885ae5f74c4867ed4a8dee585c2282ae83287e"} Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.746283 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-smnb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f826f136-a910-4120-aa62-a08e427590c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbqfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-smnb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.756332 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09e553c5-fff7-48ff-8b44-c86ab881b7bc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b8da7cf7985b3076f734741cd805f8a4f273d7620fc89a9f9d02fa906489960c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://56cb10ea63f74e8cb16b42dc94949b4ddf748e8fdf73c942fb868db9001364e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://27a7b88896a26f50315b57e5bff7d5ec0511f09f0acb636c09e3c76caf1c686b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bbac19f70c66272a40bc7fe06106f95c04b995c67c127135d678b0ba9a78b1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbac19f70c66272a40bc7fe06106f95c04b995c67c127135d678b0ba9a78b1e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.765787 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.772450 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6grfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a1a609-6066-42a0-a450-b0e70365aa9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6jf9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6grfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.778726 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-cwsts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e5684ab-0b94-4eef-af30-0c6c4ab528af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd8d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cwsts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.791776 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d
9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath
\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sphjf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sdh5t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.794868 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.794907 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.794918 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.794932 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.794942 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:25Z","lastTransitionTime":"2026-01-26T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.801699 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0c99ae5-3448-4d7b-9141-781a3683de72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a04fa4d6993fe4e83a7bd2d552bb16d9dc8e33e89a789170b8fec180c65b793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a4d65f95ca5f832e6ac85de46fd3d474221c3263ab1c2eba3123e4742fc5287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedRes
ources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7da17ce8ac77c94210b966d6bc7b376e82189a903321c9800662d2c12abf965d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://effeb6003c974dc677094f47337b7bf2ba1dad9209e7f72af53b5ac7d069f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.810564 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.818783 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.828602 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-87scd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c96023c-09ac-49d0-b8bd-09f46f6d9655\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nb6p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nb6p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nb6p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nb6p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nb6p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nb6p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nb6p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-87scd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.837970 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95fa0656-150a-4d93-a324-77a1306d91f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://019cc5f7caa17e5189240344e2885ae5f74c4867ed4a8dee585c2282ae83287e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:10:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xt6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d83d6e9dbee8896d25299332774ac25503be88561fd1040886735c806d9b1d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:10:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xt6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kmxcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.845967 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99e4f768-137c-4c5c-878d-3852f54a6df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f4382e3a3d54a3ceaf116dd5c6f7f458833943f7e948dc335bc038b3267463d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0733ced83f8a595542a3a5e1b2358bdd6e9c9867d4d31b83aba01450710a1393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0733ced83f8a595542a3a5e1b2358bdd6e9c9867d4d31b83aba01450710a1393\\\",\\\"exitCode
\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.856577 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fa44516-2654-456d-893a-96341101557c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://2f8ecaa38888bb973b4fd3205014aa0edb7c85e52834f767b37256195a18e041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d8b9a76e6a593a00eb07a766e1124a3590c5c94c41c554bebb577109de5a4ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://37fbcde7240eabecd5368c44cfa4027f8d40c4f52393eb773692e55130233c75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6215e20f15c7a51f410c9c54859dda249912a0f1e02d737e53f957cd8d73cd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6215e20f15c7a51f410c9c54859dda249912a0f1e02d737e53f957cd8d73cd32\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:10:01Z\\\",\\\"message\\\":\\\"o:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0126 00:10:01.118231 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 00:10:01.118416 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0126 00:10:01.121827 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107044536/tls.crt::/tmp/serving-cert-107044536/tls.key\\\\\\\"\\\\nI0126 00:10:01.529054 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:10:01.532621 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:10:01.532658 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:10:01.532703 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:10:01.532730 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:10:01.539927 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 00:10:01.539960 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:10:01.539981 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:01.539994 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:01.540005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:10:01.540013 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:10:01.540020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:10:01.540025 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:10:01.543048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:10:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8e80c933db284b36b8dafc7bc44abe6be54c57c85857f99b2194d01cced7327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66f10dcb1c1f631da6488a7b4271bb9abc58d887ad17e7515550b916cf9a60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f10dcb1c1f631da6488a7b4271bb9abc58d887ad17e7515550b916cf9a60cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.864889 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.874313 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sctbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sctbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.889961 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df9bb628-c0ff-4254-8f43-66c1d289b343\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://94702470d0dd24faac34520e06613c5897b79dde56d2897fabe3a52050980120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4647208e6c84a5a6977c9b5f4a59a5a2ec2b2957cb47ea0707851ab13bef96ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b64f819f442260b8aaac091fe6a09b99175d27d2ec944332d5977a5ca5af58f0\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2d6d8389d6d15bd747b8ef74dc30f010429f962e34fe75b84935720929eab5ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://cd70d62ee532dd5a0aa8e04beb99f336153670709121aa892e5fa90aca675a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2d8ce299ce0a170138601002ffd93680b9c5360
205e0cae4cfe90de54c659ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d8ce299ce0a170138601002ffd93680b9c5360205e0cae4cfe90de54c659ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6205e86cd3c1859b05bf772087c7bf0fc9286354ae84a1027fbf60ebfbd62df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6205e86cd3c1859b05bf772087c7bf0fc9286354ae84a1027fbf60ebfbd62df5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e74e60b8dabfb2b1fb5d7448547929a39ed771ac32c9c8ac05eda98c02da7625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e74e60b8dabfb2b1fb5d7448547929a39ed771ac32c9c8ac05eda98c02da7625\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.896556 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.896638 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.896659 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.896681 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.896699 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:25Z","lastTransitionTime":"2026-01-26T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.899760 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.908662 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.919042 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-mpdlk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8660dad9-43c8-4c00-872a-e00a6baab0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx9l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx9l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-mpdlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.998222 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.998273 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.998288 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.998304 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:25 crc kubenswrapper[5124]: I0126 00:10:25.998316 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:25Z","lastTransitionTime":"2026-01-26T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.100232 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.100266 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.100274 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.100287 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.100296 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:26Z","lastTransitionTime":"2026-01-26T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.202078 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.202115 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.202123 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.202136 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.202145 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:26Z","lastTransitionTime":"2026-01-26T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.213461 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.213489 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.213517 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.213555 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:26 crc kubenswrapper[5124]: E0126 00:10:26.213633 5124 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:10:26 crc kubenswrapper[5124]: E0126 00:10:26.213651 5124 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:10:26 crc kubenswrapper[5124]: E0126 00:10:26.213654 5124 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:10:26 crc kubenswrapper[5124]: E0126 00:10:26.213663 5124 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:10:26 crc kubenswrapper[5124]: E0126 00:10:26.213666 5124 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:26 crc kubenswrapper[5124]: E0126 00:10:26.213672 5124 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:26 crc kubenswrapper[5124]: E0126 
00:10:26.213673 5124 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:10:26 crc kubenswrapper[5124]: E0126 00:10:26.213711 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:42.213699643 +0000 UTC m=+120.122618992 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:26 crc kubenswrapper[5124]: E0126 00:10:26.213726 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:42.213719173 +0000 UTC m=+120.122638522 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:10:26 crc kubenswrapper[5124]: E0126 00:10:26.213736 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:42.213732064 +0000 UTC m=+120.122651413 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:26 crc kubenswrapper[5124]: E0126 00:10:26.213764 5124 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:10:26 crc kubenswrapper[5124]: E0126 00:10:26.213880 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:42.213849557 +0000 UTC m=+120.122768916 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.303666 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.303709 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.303720 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.303735 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.303746 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:26Z","lastTransitionTime":"2026-01-26T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.314526 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:26 crc kubenswrapper[5124]: E0126 00:10:26.314694 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:42.31467293 +0000 UTC m=+120.223592289 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.364878 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sctbw" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.364894 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:26 crc kubenswrapper[5124]: E0126 00:10:26.365108 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.365168 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:26 crc kubenswrapper[5124]: E0126 00:10:26.365340 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.365525 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:26 crc kubenswrapper[5124]: E0126 00:10:26.365617 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:10:26 crc kubenswrapper[5124]: E0126 00:10:26.365674 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sctbw" podUID="08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.406105 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.406176 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.406194 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.406219 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.406237 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:26Z","lastTransitionTime":"2026-01-26T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.415738 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b-metrics-certs\") pod \"network-metrics-daemon-sctbw\" (UID: \"08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b\") " pod="openshift-multus/network-metrics-daemon-sctbw" Jan 26 00:10:26 crc kubenswrapper[5124]: E0126 00:10:26.416112 5124 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:10:26 crc kubenswrapper[5124]: E0126 00:10:26.416205 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b-metrics-certs podName:08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b nodeName:}" failed. No retries permitted until 2026-01-26 00:10:42.416187291 +0000 UTC m=+120.325106640 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b-metrics-certs") pod "network-metrics-daemon-sctbw" (UID: "08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.508263 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.508295 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.508304 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.508317 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.508327 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:26Z","lastTransitionTime":"2026-01-26T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.609845 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.609877 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.609886 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.609900 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.609908 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:26Z","lastTransitionTime":"2026-01-26T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.711402 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.711647 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.711712 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.711793 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.711852 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:26Z","lastTransitionTime":"2026-01-26T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.740498 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-cwsts" event={"ID":"9e5684ab-0b94-4eef-af30-0c6c4ab528af","Type":"ContainerStarted","Data":"1f24e94669f0005644e98c3fa113dba4a1710e7eb5b236ca4be3cf038325a20f"} Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.742419 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-6grfh" event={"ID":"45a1a609-6066-42a0-a450-b0e70365aa9b","Type":"ContainerStarted","Data":"5368cab8d7f52e18a894724218d6a48866db393b10890a3154fbf3bd18336181"} Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.744436 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-mpdlk" event={"ID":"8660dad9-43c8-4c00-872a-e00a6baab0f7","Type":"ContainerStarted","Data":"b6f2454f5333ab911eebfd64bf0a3fabf18ab1bbf4c865a0ff147603146a0da7"} Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.744474 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-mpdlk" event={"ID":"8660dad9-43c8-4c00-872a-e00a6baab0f7","Type":"ContainerStarted","Data":"30d8c5e11238102663950d2ebd33f9bc42936ba7b859ad8cbb88cd6f37520d8b"} Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.745852 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"6ef64d18e8de64c939f7e04ff5f054e3e0b8a748381031063b27c9508cc08610"} Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.747210 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-smnb7" event={"ID":"f826f136-a910-4120-aa62-a08e427590c0","Type":"ContainerStarted","Data":"0af4c7adce9ca2591a5e45ed1b33cb8402b5e759836f9fbb681395b39fc0b6d8"} Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.748494 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-87scd" event={"ID":"5c96023c-09ac-49d0-b8bd-09f46f6d9655","Type":"ContainerStarted","Data":"26ea57a93d60960e35922bdbc55d0b23283952108e3035d344a5cc663ef984e1"} Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.749759 5124 generic.go:358] "Generic (PLEG): container finished" podID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" containerID="30bcd38ee5002aec5c579da18a01b6eea73a299bcf98658882e25e67b70e339f" exitCode=0 Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.749803 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" event={"ID":"d13181a0-d54a-460b-bbc7-4948fb1a4eaf","Type":"ContainerDied","Data":"30bcd38ee5002aec5c579da18a01b6eea73a299bcf98658882e25e67b70e339f"} Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.751343 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"76e5697a4abcc90d168889b662ee4316b9d94e24a716e952db8faeae9cbeb154"} Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.752874 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"26399a73adc7b2a71bc07608322a72f1b7af3f82d0814419602689a2f654f54a"} Jan 
26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.752905 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"bf57c6f27cb06c55933db21f920f0932320f8ee466c9159a48c8f592b8e27a77"} Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.754310 5124 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fa44516-2654-456d-893a-96341101557c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://2f8ecaa38888bb973b4fd3205014aa0edb7c85e52834f767b37256195a18e041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d8b9a76e6a593a00eb07a766e1124a3590c5c94c41c554bebb577109de5a4ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-co
ntroller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://37fbcde7240eabecd5368c44cfa4027f8d40c4f52393eb773692e55130233c75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6215e20f15c7a51f410c9c54859dda249912a0f1e02d737e53f957cd8d73cd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6215e20f15c7a51f410c9c54859dda249912a0f1e02d737e53f957cd8d73cd32\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:10:01Z\\\",\\\"message\\\":\\\"o:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0126 00:10:01.118231 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 00:10:01.118416 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0126 00:10:01.121827 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107044536/tls.crt::/tmp/serving-cert-107044536/tls.key\\\\\\\"\\\\nI0126 00:10:01.529054 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:10:01.532621 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:10:01.532658 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:10:01.532703 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:10:01.532730 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 
00:10:01.539927 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 00:10:01.539960 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:10:01.539981 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:01.539994 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:01.540005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:10:01.540013 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:10:01.540020 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:10:01.540025 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:10:01.543048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:10:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8e80c933db284b36b8dafc7bc44abe6be54c57c85857f99b2194d01cced7327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66f10dcb1c1f631da6488a7b4271bb9abc58d887ad17e7515550b916cf9a60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"5
0Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f10dcb1c1f631da6488a7b4271bb9abc58d887ad17e7515550b916cf9a60cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.811849 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=16.811832392 podStartE2EDuration="16.811832392s" podCreationTimestamp="2026-01-26 00:10:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:26.811281187 +0000 UTC m=+104.720200556" watchObservedRunningTime="2026-01-26 00:10:26.811832392 +0000 UTC m=+104.720751741" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.813215 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.813250 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.813262 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.813274 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.813284 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:26Z","lastTransitionTime":"2026-01-26T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.914913 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.914955 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.914966 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.914982 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.914992 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:26Z","lastTransitionTime":"2026-01-26T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.915164 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=17.915145751 podStartE2EDuration="17.915145751s" podCreationTimestamp="2026-01-26 00:10:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:26.914648138 +0000 UTC m=+104.823567497" watchObservedRunningTime="2026-01-26 00:10:26.915145751 +0000 UTC m=+104.824065100" Jan 26 00:10:26 crc kubenswrapper[5124]: I0126 00:10:26.967493 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-cwsts" podStartSLOduration=86.967473548 podStartE2EDuration="1m26.967473548s" podCreationTimestamp="2026-01-26 00:09:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:26.949818061 +0000 UTC m=+104.858737410" watchObservedRunningTime="2026-01-26 00:10:26.967473548 +0000 UTC m=+104.876392887" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.000474 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=18.000450833 podStartE2EDuration="18.000450833s" podCreationTimestamp="2026-01-26 00:10:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:26.987256843 +0000 UTC m=+104.896176192" watchObservedRunningTime="2026-01-26 00:10:27.000450833 +0000 UTC m=+104.909370182" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.019211 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.019251 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.019261 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.019276 5124 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.019286 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:27Z","lastTransitionTime":"2026-01-26T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.040085 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" podStartSLOduration=86.040069953 podStartE2EDuration="1m26.040069953s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:27.039473188 +0000 UTC m=+104.948392527" watchObservedRunningTime="2026-01-26 00:10:27.040069953 +0000 UTC m=+104.948989302" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.075610 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=17.075574185 podStartE2EDuration="17.075574185s" podCreationTimestamp="2026-01-26 00:10:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:27.054452975 +0000 UTC m=+104.963372334" watchObservedRunningTime="2026-01-26 00:10:27.075574185 +0000 UTC m=+104.984493534" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.121168 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.121208 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.121218 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.121230 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.121239 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:27Z","lastTransitionTime":"2026-01-26T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.155284 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-mpdlk" podStartSLOduration=86.155268548 podStartE2EDuration="1m26.155268548s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:27.154817136 +0000 UTC m=+105.063736495" watchObservedRunningTime="2026-01-26 00:10:27.155268548 +0000 UTC m=+105.064187897" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.185083 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-smnb7" podStartSLOduration=86.185061778 podStartE2EDuration="1m26.185061778s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:27.171294143 +0000 UTC m=+105.080213492" watchObservedRunningTime="2026-01-26 00:10:27.185061778 +0000 UTC m=+105.093981127" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.198414 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-6grfh" podStartSLOduration=87.198394792 podStartE2EDuration="1m27.198394792s" podCreationTimestamp="2026-01-26 00:09:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:27.198261388 +0000 UTC m=+105.107180737" watchObservedRunningTime="2026-01-26 00:10:27.198394792 +0000 UTC m=+105.107314131" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.224162 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.224206 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.224219 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.224233 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.224242 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:27Z","lastTransitionTime":"2026-01-26T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.325548 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.325611 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.325626 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.325642 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.325653 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:27Z","lastTransitionTime":"2026-01-26T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.427739 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.427779 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.427790 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.427805 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.427814 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:27Z","lastTransitionTime":"2026-01-26T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.529301 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.529349 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.529361 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.529382 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.529394 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:27Z","lastTransitionTime":"2026-01-26T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.631485 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.631529 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.631540 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.631559 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.631570 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:27Z","lastTransitionTime":"2026-01-26T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.733688 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.733722 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.733732 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.733744 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.733753 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:27Z","lastTransitionTime":"2026-01-26T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.759440 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" event={"ID":"d13181a0-d54a-460b-bbc7-4948fb1a4eaf","Type":"ContainerStarted","Data":"b5f0b719809cb0822f685a25e50f161bb2ebc5cf1c23741f70dd758ddb876b3b"} Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.759484 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" event={"ID":"d13181a0-d54a-460b-bbc7-4948fb1a4eaf","Type":"ContainerStarted","Data":"7bc933e302fe556f3f2333aabcc9a3d08a97facf76c1e513bf999a1d988e23fb"} Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.759495 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" event={"ID":"d13181a0-d54a-460b-bbc7-4948fb1a4eaf","Type":"ContainerStarted","Data":"5d0e243efad4a7977ff479d31ca346032d0b27e840a55f1e5a5d7bb273240f85"} Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.759504 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" event={"ID":"d13181a0-d54a-460b-bbc7-4948fb1a4eaf","Type":"ContainerStarted","Data":"0d77c5aa52d9865a0987d699842af27c3031d5bc2f5c315f2214c950b8209d98"} Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.759514 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" event={"ID":"d13181a0-d54a-460b-bbc7-4948fb1a4eaf","Type":"ContainerStarted","Data":"a739bd25adba363ac8e62a851d5bbc4e0970ab2b4d947f6b0abcb988e9b8ae33"} Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.759522 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" event={"ID":"d13181a0-d54a-460b-bbc7-4948fb1a4eaf","Type":"ContainerStarted","Data":"0846e7a97039a12d11ae54a129bb1cc8d22304487515a089342a29f2e46c54cd"} Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.761521 5124 generic.go:358] "Generic (PLEG): container finished" podID="5c96023c-09ac-49d0-b8bd-09f46f6d9655" containerID="26ea57a93d60960e35922bdbc55d0b23283952108e3035d344a5cc663ef984e1" exitCode=0 Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.761563 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-87scd" event={"ID":"5c96023c-09ac-49d0-b8bd-09f46f6d9655","Type":"ContainerDied","Data":"26ea57a93d60960e35922bdbc55d0b23283952108e3035d344a5cc663ef984e1"} Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.836935 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.836980 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.836993 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.837008 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.837022 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:27Z","lastTransitionTime":"2026-01-26T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.939271 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.939310 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.939338 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.939355 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:27 crc kubenswrapper[5124]: I0126 00:10:27.939366 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:27Z","lastTransitionTime":"2026-01-26T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.041257 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.041301 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.041312 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.041326 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.041336 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:28Z","lastTransitionTime":"2026-01-26T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.143139 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.143175 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.143184 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.143198 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.143207 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:28Z","lastTransitionTime":"2026-01-26T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.245832 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.245896 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.245917 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.245942 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.245959 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:28Z","lastTransitionTime":"2026-01-26T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.348268 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.348311 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.348324 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.348341 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.348354 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:28Z","lastTransitionTime":"2026-01-26T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.364690 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sctbw" Jan 26 00:10:28 crc kubenswrapper[5124]: E0126 00:10:28.364837 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sctbw" podUID="08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.365251 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:28 crc kubenswrapper[5124]: E0126 00:10:28.365357 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.365397 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.365403 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:28 crc kubenswrapper[5124]: E0126 00:10:28.365467 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:10:28 crc kubenswrapper[5124]: E0126 00:10:28.365639 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.450818 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.450869 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.450884 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.450902 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.450918 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:28Z","lastTransitionTime":"2026-01-26T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.554144 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.554496 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.554793 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.554984 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.555215 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:28Z","lastTransitionTime":"2026-01-26T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.657355 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.657411 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.657426 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.657445 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.657457 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:28Z","lastTransitionTime":"2026-01-26T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.759896 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.759951 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.759964 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.759984 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.759996 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:28Z","lastTransitionTime":"2026-01-26T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.767343 5124 generic.go:358] "Generic (PLEG): container finished" podID="5c96023c-09ac-49d0-b8bd-09f46f6d9655" containerID="6f43d62ca42305380cc0126bb2feae165f3ec50919f39465e69e5fe73a463eb3" exitCode=0 Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.767422 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-87scd" event={"ID":"5c96023c-09ac-49d0-b8bd-09f46f6d9655","Type":"ContainerDied","Data":"6f43d62ca42305380cc0126bb2feae165f3ec50919f39465e69e5fe73a463eb3"} Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.862233 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.862280 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.862290 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.862305 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.862314 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:28Z","lastTransitionTime":"2026-01-26T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.964425 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.964495 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.964516 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.964565 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:28 crc kubenswrapper[5124]: I0126 00:10:28.964578 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:28Z","lastTransitionTime":"2026-01-26T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.066065 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.066103 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.066112 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.066124 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.066133 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:29Z","lastTransitionTime":"2026-01-26T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.168228 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.168474 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.168574 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.168694 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.168776 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:29Z","lastTransitionTime":"2026-01-26T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.270869 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.270904 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.270912 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.270925 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.270934 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:29Z","lastTransitionTime":"2026-01-26T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.371731 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.371757 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.371765 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.371776 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.371784 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:29Z","lastTransitionTime":"2026-01-26T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.473431 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.473716 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.473731 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.473747 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.473757 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:29Z","lastTransitionTime":"2026-01-26T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.576072 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.576332 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.576391 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.576459 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.576513 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:29Z","lastTransitionTime":"2026-01-26T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.679370 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.679622 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.679710 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.679811 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.679928 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:29Z","lastTransitionTime":"2026-01-26T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.775116 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" event={"ID":"d13181a0-d54a-460b-bbc7-4948fb1a4eaf","Type":"ContainerStarted","Data":"ca1b69bade3b1295f64aadb4876cc913493c85d40031cbb54db17ed26dd59b35"} Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.777675 5124 generic.go:358] "Generic (PLEG): container finished" podID="5c96023c-09ac-49d0-b8bd-09f46f6d9655" containerID="e4d118736a8c2b00e340e0ae6aa00750f43580db687e9113de9ae72fe85b5f10" exitCode=0 Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.777735 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-87scd" event={"ID":"5c96023c-09ac-49d0-b8bd-09f46f6d9655","Type":"ContainerDied","Data":"e4d118736a8c2b00e340e0ae6aa00750f43580db687e9113de9ae72fe85b5f10"} Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.781863 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.781907 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.781924 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.781945 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.781965 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:29Z","lastTransitionTime":"2026-01-26T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.884030 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.884072 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.884081 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.884094 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.884106 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:29Z","lastTransitionTime":"2026-01-26T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.985686 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.985718 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.985734 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.985751 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:29 crc kubenswrapper[5124]: I0126 00:10:29.985761 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:29Z","lastTransitionTime":"2026-01-26T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.087762 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.087804 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.087814 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.087829 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.087839 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:30Z","lastTransitionTime":"2026-01-26T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.189879 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.189932 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.189944 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.189961 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.189972 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:30Z","lastTransitionTime":"2026-01-26T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.291842 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.291881 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.291904 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.291918 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.291928 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:30Z","lastTransitionTime":"2026-01-26T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.365249 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:30 crc kubenswrapper[5124]: E0126 00:10:30.365420 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.365437 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.365545 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:30 crc kubenswrapper[5124]: E0126 00:10:30.365720 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:10:30 crc kubenswrapper[5124]: E0126 00:10:30.365857 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.365911 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sctbw" Jan 26 00:10:30 crc kubenswrapper[5124]: E0126 00:10:30.366014 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sctbw" podUID="08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b" Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.393610 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.393681 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.393697 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.393719 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.393736 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:30Z","lastTransitionTime":"2026-01-26T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.495783 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.495861 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.495881 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.495908 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.495927 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:30Z","lastTransitionTime":"2026-01-26T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.598091 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.598149 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.598165 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.598184 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.598197 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:30Z","lastTransitionTime":"2026-01-26T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.700456 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.700526 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.700545 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.700570 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.700620 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:30Z","lastTransitionTime":"2026-01-26T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.784870 5124 generic.go:358] "Generic (PLEG): container finished" podID="5c96023c-09ac-49d0-b8bd-09f46f6d9655" containerID="ae17757f5b6c6c39694ac0603dc92abfe652500f8e9a517e2f487f26bb6517d2" exitCode=0 Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.784951 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-87scd" event={"ID":"5c96023c-09ac-49d0-b8bd-09f46f6d9655","Type":"ContainerDied","Data":"ae17757f5b6c6c39694ac0603dc92abfe652500f8e9a517e2f487f26bb6517d2"} Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.803327 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.803394 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.803413 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.803439 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.803452 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:30Z","lastTransitionTime":"2026-01-26T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.905122 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.905171 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.905183 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.905199 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:30 crc kubenswrapper[5124]: I0126 00:10:30.905211 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:30Z","lastTransitionTime":"2026-01-26T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.006902 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.006953 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.006971 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.006994 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.007012 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:31Z","lastTransitionTime":"2026-01-26T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.109646 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.109706 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.109720 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.109741 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.109753 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:31Z","lastTransitionTime":"2026-01-26T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.211680 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.211723 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.211732 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.211747 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.211758 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:31Z","lastTransitionTime":"2026-01-26T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.313805 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.313887 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.313915 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.313948 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.313972 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:31Z","lastTransitionTime":"2026-01-26T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.416631 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.416683 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.416900 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.416918 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.416930 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:31Z","lastTransitionTime":"2026-01-26T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.519253 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.519297 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.519306 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.519321 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.519332 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:31Z","lastTransitionTime":"2026-01-26T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.621657 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.622243 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.622265 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.622289 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.622308 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:31Z","lastTransitionTime":"2026-01-26T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.724736 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.724780 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.724791 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.724805 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.724815 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:31Z","lastTransitionTime":"2026-01-26T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.793181 5124 generic.go:358] "Generic (PLEG): container finished" podID="5c96023c-09ac-49d0-b8bd-09f46f6d9655" containerID="6586f31afddac2a8a2c4c8c4c68ffa6d0b8655b7f10eb0f4c195f13c37077c81" exitCode=0 Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.793273 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-87scd" event={"ID":"5c96023c-09ac-49d0-b8bd-09f46f6d9655","Type":"ContainerDied","Data":"6586f31afddac2a8a2c4c8c4c68ffa6d0b8655b7f10eb0f4c195f13c37077c81"} Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.802176 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" event={"ID":"d13181a0-d54a-460b-bbc7-4948fb1a4eaf","Type":"ContainerStarted","Data":"b88d112d1a5a62e96208e9742a4f115e993356d2fe5cbb8114638a70a7504768"} Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.802606 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.802643 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.802653 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.834838 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.835267 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.835319 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.835333 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.835357 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.835371 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:31Z","lastTransitionTime":"2026-01-26T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.839300 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.878988 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" podStartSLOduration=90.878972232 podStartE2EDuration="1m30.878972232s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:31.848832704 +0000 UTC m=+109.757752093" watchObservedRunningTime="2026-01-26 00:10:31.878972232 +0000 UTC m=+109.787891581" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.938722 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.938766 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.938779 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.938795 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:31 crc kubenswrapper[5124]: I0126 00:10:31.938808 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:31Z","lastTransitionTime":"2026-01-26T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.040421 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.040470 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.040486 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.040505 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.040517 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:32Z","lastTransitionTime":"2026-01-26T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.142813 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.142891 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.142916 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.142947 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.142972 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:32Z","lastTransitionTime":"2026-01-26T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.245910 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.245982 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.246002 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.246030 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.246051 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:32Z","lastTransitionTime":"2026-01-26T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.348180 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.348250 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.348271 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.348297 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.348317 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:32Z","lastTransitionTime":"2026-01-26T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.367516 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sctbw" Jan 26 00:10:32 crc kubenswrapper[5124]: E0126 00:10:32.367677 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sctbw" podUID="08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.368045 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:32 crc kubenswrapper[5124]: E0126 00:10:32.368131 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.368180 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:32 crc kubenswrapper[5124]: E0126 00:10:32.368255 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.368297 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:32 crc kubenswrapper[5124]: E0126 00:10:32.368366 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.415628 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.415677 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.415698 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.415715 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.415726 5124 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:32Z","lastTransitionTime":"2026-01-26T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.455780 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dgmrq"] Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.459117 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dgmrq" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.461306 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.461322 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.461805 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.461835 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.482645 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dbe4622f-e299-407d-b297-f6284d911f4e-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-dgmrq\" (UID: \"dbe4622f-e299-407d-b297-f6284d911f4e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dgmrq" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.482704 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/dbe4622f-e299-407d-b297-f6284d911f4e-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-dgmrq\" (UID: \"dbe4622f-e299-407d-b297-f6284d911f4e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dgmrq" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.482735 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/dbe4622f-e299-407d-b297-f6284d911f4e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-dgmrq\" (UID: \"dbe4622f-e299-407d-b297-f6284d911f4e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dgmrq" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.482752 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbe4622f-e299-407d-b297-f6284d911f4e-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-dgmrq\" (UID: \"dbe4622f-e299-407d-b297-f6284d911f4e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dgmrq" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.482766 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dbe4622f-e299-407d-b297-f6284d911f4e-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-dgmrq\" (UID: \"dbe4622f-e299-407d-b297-f6284d911f4e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dgmrq" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.584073 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/dbe4622f-e299-407d-b297-f6284d911f4e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-dgmrq\" (UID: \"dbe4622f-e299-407d-b297-f6284d911f4e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dgmrq" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.584113 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbe4622f-e299-407d-b297-f6284d911f4e-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-dgmrq\" (UID: \"dbe4622f-e299-407d-b297-f6284d911f4e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dgmrq" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.584135 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dbe4622f-e299-407d-b297-f6284d911f4e-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-dgmrq\" (UID: \"dbe4622f-e299-407d-b297-f6284d911f4e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dgmrq" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.584184 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/dbe4622f-e299-407d-b297-f6284d911f4e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-dgmrq\" (UID: \"dbe4622f-e299-407d-b297-f6284d911f4e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dgmrq" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.584196 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dbe4622f-e299-407d-b297-f6284d911f4e-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-dgmrq\" (UID: \"dbe4622f-e299-407d-b297-f6284d911f4e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dgmrq" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.584360 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/dbe4622f-e299-407d-b297-f6284d911f4e-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-dgmrq\" (UID: \"dbe4622f-e299-407d-b297-f6284d911f4e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dgmrq" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.584404 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/dbe4622f-e299-407d-b297-f6284d911f4e-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-dgmrq\" (UID: \"dbe4622f-e299-407d-b297-f6284d911f4e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dgmrq" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.585218 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dbe4622f-e299-407d-b297-f6284d911f4e-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-dgmrq\" (UID: \"dbe4622f-e299-407d-b297-f6284d911f4e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dgmrq" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.592390 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbe4622f-e299-407d-b297-f6284d911f4e-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-dgmrq\" (UID: \"dbe4622f-e299-407d-b297-f6284d911f4e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dgmrq" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.601418 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dbe4622f-e299-407d-b297-f6284d911f4e-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-dgmrq\" (UID: \"dbe4622f-e299-407d-b297-f6284d911f4e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dgmrq" Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.809179 5124 generic.go:358] "Generic (PLEG): container finished" podID="5c96023c-09ac-49d0-b8bd-09f46f6d9655" containerID="c85c0e34814e3b16d0d2a74298591f13da60319070e0ac804a24c3837c77ebd3" exitCode=0 Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.809268 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-87scd" event={"ID":"5c96023c-09ac-49d0-b8bd-09f46f6d9655","Type":"ContainerDied","Data":"c85c0e34814e3b16d0d2a74298591f13da60319070e0ac804a24c3837c77ebd3"} Jan 26 00:10:32 crc kubenswrapper[5124]: I0126 00:10:32.845268 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dgmrq" Jan 26 00:10:32 crc kubenswrapper[5124]: W0126 00:10:32.862382 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbe4622f_e299_407d_b297_f6284d911f4e.slice/crio-2b14c5b3ed7fdc9ced9e476edd09df64bbf143e1f7bef18db240f7c6c7eba815 WatchSource:0}: Error finding container 2b14c5b3ed7fdc9ced9e476edd09df64bbf143e1f7bef18db240f7c6c7eba815: Status 404 returned error can't find the container with id 2b14c5b3ed7fdc9ced9e476edd09df64bbf143e1f7bef18db240f7c6c7eba815 Jan 26 00:10:33 crc kubenswrapper[5124]: I0126 00:10:33.366096 5124 scope.go:117] "RemoveContainer" containerID="6215e20f15c7a51f410c9c54859dda249912a0f1e02d737e53f957cd8d73cd32" Jan 26 00:10:33 crc kubenswrapper[5124]: E0126 00:10:33.366346 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 26 00:10:33 crc kubenswrapper[5124]: I0126 00:10:33.384279 5124 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Jan 26 00:10:33 crc kubenswrapper[5124]: I0126 00:10:33.391555 5124 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 26 00:10:33 crc kubenswrapper[5124]: I0126 00:10:33.814807 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dgmrq" event={"ID":"dbe4622f-e299-407d-b297-f6284d911f4e","Type":"ContainerStarted","Data":"19428037b91c91714b9fe2fd0a90ea2db8946e870bb92df056321bf7c74c3c15"} Jan 26 00:10:33 crc kubenswrapper[5124]: I0126 00:10:33.814865 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dgmrq" event={"ID":"dbe4622f-e299-407d-b297-f6284d911f4e","Type":"ContainerStarted","Data":"2b14c5b3ed7fdc9ced9e476edd09df64bbf143e1f7bef18db240f7c6c7eba815"} Jan 26 00:10:33 crc kubenswrapper[5124]: I0126 00:10:33.823405 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-87scd" event={"ID":"5c96023c-09ac-49d0-b8bd-09f46f6d9655","Type":"ContainerStarted","Data":"09c8f8f9ea3428914c59f175f279239c6484559d821a99e829776767c35e17a9"} Jan 26 00:10:33 crc kubenswrapper[5124]: I0126 00:10:33.831958 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dgmrq" podStartSLOduration=92.831941614 podStartE2EDuration="1m32.831941614s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:33.831027009 +0000 UTC m=+111.739946358" watchObservedRunningTime="2026-01-26 00:10:33.831941614 +0000 UTC m=+111.740860963" Jan 26 00:10:33 crc kubenswrapper[5124]: I0126 00:10:33.862301 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-87scd" podStartSLOduration=92.862284068 
podStartE2EDuration="1m32.862284068s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:33.859879124 +0000 UTC m=+111.768798483" watchObservedRunningTime="2026-01-26 00:10:33.862284068 +0000 UTC m=+111.771203417" Jan 26 00:10:33 crc kubenswrapper[5124]: I0126 00:10:33.924392 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-sctbw"] Jan 26 00:10:33 crc kubenswrapper[5124]: I0126 00:10:33.924548 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sctbw" Jan 26 00:10:33 crc kubenswrapper[5124]: E0126 00:10:33.924676 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sctbw" podUID="08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b" Jan 26 00:10:34 crc kubenswrapper[5124]: I0126 00:10:34.365784 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:34 crc kubenswrapper[5124]: E0126 00:10:34.366191 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:10:34 crc kubenswrapper[5124]: I0126 00:10:34.365976 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:34 crc kubenswrapper[5124]: I0126 00:10:34.365784 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:34 crc kubenswrapper[5124]: E0126 00:10:34.366258 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:10:34 crc kubenswrapper[5124]: E0126 00:10:34.366441 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:10:35 crc kubenswrapper[5124]: I0126 00:10:35.364577 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sctbw" Jan 26 00:10:35 crc kubenswrapper[5124]: E0126 00:10:35.364743 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sctbw" podUID="08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b" Jan 26 00:10:36 crc kubenswrapper[5124]: I0126 00:10:36.365258 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:36 crc kubenswrapper[5124]: I0126 00:10:36.365268 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:36 crc kubenswrapper[5124]: E0126 00:10:36.365825 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:10:36 crc kubenswrapper[5124]: E0126 00:10:36.365913 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:10:36 crc kubenswrapper[5124]: I0126 00:10:36.365305 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:36 crc kubenswrapper[5124]: E0126 00:10:36.366120 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:10:37 crc kubenswrapper[5124]: I0126 00:10:37.365394 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sctbw" Jan 26 00:10:37 crc kubenswrapper[5124]: E0126 00:10:37.365621 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sctbw" podUID="08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b" Jan 26 00:10:38 crc kubenswrapper[5124]: I0126 00:10:38.364433 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:38 crc kubenswrapper[5124]: E0126 00:10:38.364566 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:10:38 crc kubenswrapper[5124]: I0126 00:10:38.364661 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:38 crc kubenswrapper[5124]: I0126 00:10:38.364715 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:38 crc kubenswrapper[5124]: E0126 00:10:38.364925 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:10:38 crc kubenswrapper[5124]: E0126 00:10:38.364972 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.364870 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sctbw" Jan 26 00:10:39 crc kubenswrapper[5124]: E0126 00:10:39.365861 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sctbw" podUID="08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.558347 5124 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.558576 5124 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.603306 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-5cjkn"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.770953 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-lxzd9"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.771118 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-5cjkn" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.773377 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-s87zt"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.773482 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-lxzd9" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.775822 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.775928 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.776240 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.776314 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.776411 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-6629f"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.776498 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.777883 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.779273 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-qdvls"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.780405 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-6629f" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.782031 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-25hx6"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.782207 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-qdvls" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.782906 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.782934 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.785460 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.785869 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.786444 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.786827 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.786847 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-f6l2j"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.787768 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.787961 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.788107 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.788255 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.789262 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.789336 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.789923 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.790423 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.791052 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.791322 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.791629 5124 reflector.go:430] 
"Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.792052 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.792241 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-fpklc"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.792387 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-f6l2j" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.792282 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.792248 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.803161 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.803871 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.804284 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.804683 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.805208 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.805408 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.805571 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.805654 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.810843 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29489760-dm2tt"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.812094 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-fpklc" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.816198 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.827523 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.828147 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.828870 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.829080 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.829225 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.830072 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.830388 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.830610 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.831734 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-vq8mw"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.832271 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29489760-dm2tt" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.832536 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.832715 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.832935 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.834289 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jkc7k"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.834540 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vq8mw" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.835800 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.837634 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.837721 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.837650 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.841355 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-v5jrb"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.841571 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jkc7k" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.843568 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.845256 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-b7nfk"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.845493 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.846872 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.846897 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"serviceca\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.847117 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.847326 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.847979 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"pruner-dockercfg-rs58m\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.848037 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.848275 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.848303 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.848327 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.848376 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.848424 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.849065 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.849289 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.849436 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.849582 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.849960 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.850235 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.850496 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.851117 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.851283 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.851799 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.852137 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.852142 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.853225 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.854126 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.854457 5124 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.854709 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.865238 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.880230 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-sv2rt"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.885720 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.893350 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-lvq9k"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.893786 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26da0b98-2814-44cd-b28b-a1b2ef0ee88e-config\") pod \"controller-manager-65b6cccf98-5cjkn\" (UID: \"26da0b98-2814-44cd-b28b-a1b2ef0ee88e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5cjkn" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.893829 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-audit-dir\") pod \"oauth-openshift-66458b6674-v5jrb\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.893871 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwfv8\" (UniqueName: \"kubernetes.io/projected/670e3869-615d-43d1-8b6a-e0c80cebaab9-kube-api-access-cwfv8\") pod \"cluster-image-registry-operator-86c45576b9-vq8mw\" (UID: \"670e3869-615d-43d1-8b6a-e0c80cebaab9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vq8mw" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.893897 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/670e3869-615d-43d1-8b6a-e0c80cebaab9-tmp\") pod \"cluster-image-registry-operator-86c45576b9-vq8mw\" (UID: \"670e3869-615d-43d1-8b6a-e0c80cebaab9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vq8mw" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.893925 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-v5jrb\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.893959 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-v5jrb\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.893981 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-v5jrb\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.893999 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/26da0b98-2814-44cd-b28b-a1b2ef0ee88e-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-5cjkn\" (UID: \"26da0b98-2814-44cd-b28b-a1b2ef0ee88e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5cjkn" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.894705 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-v5jrb\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.894823 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-b7nfk" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.896265 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-v5jrb\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.896327 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.896320 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-v5jrb\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.896446 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f3b6839d-b688-438b-bf37-fa1f421afc27-available-featuregates\") pod \"openshift-config-operator-5777786469-lxzd9\" (UID: \"f3b6839d-b688-438b-bf37-fa1f421afc27\") " pod="openshift-config-operator/openshift-config-operator-5777786469-lxzd9" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.896521 5124 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-audit-policies\") pod \"oauth-openshift-66458b6674-v5jrb\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.896570 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-v5jrb\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.896629 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/26da0b98-2814-44cd-b28b-a1b2ef0ee88e-client-ca\") pod \"controller-manager-65b6cccf98-5cjkn\" (UID: \"26da0b98-2814-44cd-b28b-a1b2ef0ee88e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5cjkn" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.896653 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x9k8\" (UniqueName: \"kubernetes.io/projected/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-kube-api-access-8x9k8\") pod \"oauth-openshift-66458b6674-v5jrb\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.896684 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wj5b\" (UniqueName: \"kubernetes.io/projected/f3b6839d-b688-438b-bf37-fa1f421afc27-kube-api-access-8wj5b\") pod \"openshift-config-operator-5777786469-lxzd9\" (UID: \"f3b6839d-b688-438b-bf37-fa1f421afc27\") " pod="openshift-config-operator/openshift-config-operator-5777786469-lxzd9" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.896704 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-v5jrb\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.896721 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26da0b98-2814-44cd-b28b-a1b2ef0ee88e-serving-cert\") pod \"controller-manager-65b6cccf98-5cjkn\" (UID: \"26da0b98-2814-44cd-b28b-a1b2ef0ee88e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5cjkn" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.896747 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/670e3869-615d-43d1-8b6a-e0c80cebaab9-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-vq8mw\" (UID: \"670e3869-615d-43d1-8b6a-e0c80cebaab9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vq8mw" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 
00:10:39.896766 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-v5jrb\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.896788 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3b6839d-b688-438b-bf37-fa1f421afc27-serving-cert\") pod \"openshift-config-operator-5777786469-lxzd9\" (UID: \"f3b6839d-b688-438b-bf37-fa1f421afc27\") " pod="openshift-config-operator/openshift-config-operator-5777786469-lxzd9" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.896811 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/26da0b98-2814-44cd-b28b-a1b2ef0ee88e-tmp\") pod \"controller-manager-65b6cccf98-5cjkn\" (UID: \"26da0b98-2814-44cd-b28b-a1b2ef0ee88e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5cjkn" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.896829 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmp97\" (UniqueName: \"kubernetes.io/projected/26da0b98-2814-44cd-b28b-a1b2ef0ee88e-kube-api-access-zmp97\") pod \"controller-manager-65b6cccf98-5cjkn\" (UID: \"26da0b98-2814-44cd-b28b-a1b2ef0ee88e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5cjkn" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.896853 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/670e3869-615d-43d1-8b6a-e0c80cebaab9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-vq8mw\" (UID: \"670e3869-615d-43d1-8b6a-e0c80cebaab9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vq8mw" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.896871 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-v5jrb\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.896892 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/670e3869-615d-43d1-8b6a-e0c80cebaab9-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-vq8mw\" (UID: \"670e3869-615d-43d1-8b6a-e0c80cebaab9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vq8mw" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.896909 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/670e3869-615d-43d1-8b6a-e0c80cebaab9-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-vq8mw\" (UID: \"670e3869-615d-43d1-8b6a-e0c80cebaab9\") " 
pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vq8mw" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.896939 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-v5jrb\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.897310 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.897471 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.897550 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-t5442"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.897811 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-sv2rt" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.897917 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-lvq9k" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.898148 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.898209 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.898422 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.898936 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.899008 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.900393 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-vcw8h"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.900474 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-t5442" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.903234 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-rb8jj"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.903341 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-747b44746d-vcw8h" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.907897 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29489760-ldpxs"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.908353 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.908383 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.908427 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.908520 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.908565 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.908873 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.909038 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.909247 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.910037 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-rb8jj" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.910212 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.910332 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.912260 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zfncw"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.912868 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-ldpxs" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.915732 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.916245 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-ns6rw"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.916413 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zfncw" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.920553 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zbjgw"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.920801 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-ns6rw" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.924652 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-6np67"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.924753 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zbjgw" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.927737 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-xzm9l"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.932915 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-nsc2v"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.932985 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-6np67" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.933165 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-xzm9l" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.933835 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.937810 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kpn7g"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.937933 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-nsc2v" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.940862 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-csld6"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.940955 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kpn7g" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.943303 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-9jvql"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.943444 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-csld6" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.946429 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-9qgdz"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.948101 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-9jvql" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.951565 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8cj7n"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.951888 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-9qgdz" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.954775 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.958725 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-fqxww"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.958863 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8cj7n" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.961630 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-mbllj"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.962110 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-fqxww" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.964491 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-zbgtx"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.964724 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-mbllj" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.967302 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vfn25"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.967567 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-zbgtx" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.970201 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-5hwt4"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.970372 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vfn25" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.972746 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sdxrl"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.973244 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-5hwt4" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.973810 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.975377 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-wpz4s"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.975547 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sdxrl" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.979931 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5tzb8"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.980042 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-wpz4s" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.982298 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2xm5v"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.982469 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5tzb8" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.984927 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-5cjkn"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.984947 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-6629f"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.984957 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-f6l2j"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.984967 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-25hx6"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.984976 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-lxzd9"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.984986 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-s87zt"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.984995 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zfncw"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.985006 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-fpklc"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.985014 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-vq8mw"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.985040 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jkc7k"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.985049 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-authentication/oauth-openshift-66458b6674-v5jrb"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.985058 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-sv2rt"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.985069 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-nc9fk"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.985092 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2xm5v" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.987741 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-n64rh"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.987833 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-nc9fk" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.990284 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-vp4mw"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.990458 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-n64rh" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.992687 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-kwjfc"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.992873 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-vp4mw" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.994278 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.995611 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-87k2l"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.995791 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-kwjfc" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.999392 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-audit-policies\") pod \"oauth-openshift-66458b6674-v5jrb\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.999436 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.999469 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c696bafb-e286-4dc1-8edd-860c8c0564da-serving-cert\") pod \"apiserver-9ddfb9f55-s87zt\" (UID: \"c696bafb-e286-4dc1-8edd-860c8c0564da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.999578 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfdd3fba-e428-46ea-a831-e53d949c342a-serving-cert\") pod \"service-ca-operator-5b9c976747-6np67\" (UID: \"dfdd3fba-e428-46ea-a831-e53d949c342a\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-6np67" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.999618 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-v5jrb\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.999640 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkh5d\" (UniqueName: \"kubernetes.io/projected/5ce48d95-5f74-4d15-8f19-94cfd81c3dcf-kube-api-access-wkh5d\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.999650 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-rb8jj"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.999661 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/498973e3-482d-4a19-9224-c3e67efc2a20-etcd-serving-ca\") pod \"apiserver-8596bd845d-fpklc\" (UID: \"498973e3-482d-4a19-9224-c3e67efc2a20\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fpklc" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.999680 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c2cd8439-aeb3-4321-9842-11b3cbb37b0b-metrics-certs\") 
pod \"router-default-68cf44c8b8-9jvql\" (UID: \"c2cd8439-aeb3-4321-9842-11b3cbb37b0b\") " pod="openshift-ingress/router-default-68cf44c8b8-9jvql" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.999704 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c2cd8439-aeb3-4321-9842-11b3cbb37b0b-default-certificate\") pod \"router-default-68cf44c8b8-9jvql\" (UID: \"c2cd8439-aeb3-4321-9842-11b3cbb37b0b\") " pod="openshift-ingress/router-default-68cf44c8b8-9jvql" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.999757 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29489760-dm2tt"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.999776 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8cj7n"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.999794 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zbjgw"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.999816 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-lvq9k"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.999843 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-qdvls"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.999856 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kpn7g"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.999870 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-mbllj"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.999871 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6b5e4a3d-13f4-42c6-9adb-30a826411994-etcd-ca\") pod \"etcd-operator-69b85846b6-sv2rt\" (UID: \"6b5e4a3d-13f4-42c6-9adb-30a826411994\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sv2rt" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.999885 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vfn25"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.999921 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-nc9fk"] Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.999917 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trhq6\" (UniqueName: \"kubernetes.io/projected/036651d1-0c52-4454-8385-bf3f84e19378-kube-api-access-trhq6\") pod \"image-pruner-29489760-dm2tt\" (UID: \"036651d1-0c52-4454-8385-bf3f84e19378\") " pod="openshift-image-registry/image-pruner-29489760-dm2tt" Jan 26 00:10:39 crc kubenswrapper[5124]: I0126 00:10:39.999933 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-vcw8h"] Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:39.999944 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b9496837-38dd-4e08-bf40-9a191112e42a-config\") pod \"machine-api-operator-755bb95488-6629f\" (UID: \"b9496837-38dd-4e08-bf40-9a191112e42a\") " pod="openshift-machine-api/machine-api-operator-755bb95488-6629f" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:39.999950 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-87k2l" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.000031 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8x9k8\" (UniqueName: \"kubernetes.io/projected/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-kube-api-access-8x9k8\") pod \"oauth-openshift-66458b6674-v5jrb\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.000053 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzpmz\" (UniqueName: \"kubernetes.io/projected/498973e3-482d-4a19-9224-c3e67efc2a20-kube-api-access-gzpmz\") pod \"apiserver-8596bd845d-fpklc\" (UID: \"498973e3-482d-4a19-9224-c3e67efc2a20\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fpklc" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:39.999950 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-b7nfk"] Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.000186 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-ns6rw"] Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.000199 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2xm5v"] Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.000216 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29489760-ldpxs"] Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.000230 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-5hwt4"] Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.000240 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-fqxww"] Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.000248 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-csld6"] Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.000256 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-nsc2v"] Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.000269 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-xzm9l"] Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.000278 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-9qgdz"] Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.000291 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-6np67"] Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.000288 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-audit-policies\") pod \"oauth-openshift-66458b6674-v5jrb\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.000299 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-zbgtx"] Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.000356 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5tzb8"] Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.000374 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-n64rh"] Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.000389 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-wpz4s"] Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.000406 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sdxrl"] Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.000418 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-kwjfc"] Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.000507 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5ce48d95-5f74-4d15-8f19-94cfd81c3dcf-installation-pull-secrets\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:40 crc kubenswrapper[5124]: E0126 00:10:40.000573 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:40.500550629 +0000 UTC m=+118.409470068 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.000726 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5q88\" (UniqueName: \"kubernetes.io/projected/2c16907d-1bcd-420c-879d-65a0552e69d3-kube-api-access-z5q88\") pod \"collect-profiles-29489760-ldpxs\" (UID: \"2c16907d-1bcd-420c-879d-65a0552e69d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-ldpxs" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.000763 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/498973e3-482d-4a19-9224-c3e67efc2a20-trusted-ca-bundle\") pod \"apiserver-8596bd845d-fpklc\" (UID: \"498973e3-482d-4a19-9224-c3e67efc2a20\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fpklc" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.000797 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-v5jrb\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.000897 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26da0b98-2814-44cd-b28b-a1b2ef0ee88e-serving-cert\") pod \"controller-manager-65b6cccf98-5cjkn\" (UID: \"26da0b98-2814-44cd-b28b-a1b2ef0ee88e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5cjkn" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.001054 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/670e3869-615d-43d1-8b6a-e0c80cebaab9-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-vq8mw\" (UID: \"670e3869-615d-43d1-8b6a-e0c80cebaab9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vq8mw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.001079 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-v5jrb\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.001109 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/23eb49a3-e378-481a-932f-83ec71b22e6d-signing-cabundle\") pod \"service-ca-74545575db-nsc2v\" (UID: \"23eb49a3-e378-481a-932f-83ec71b22e6d\") " pod="openshift-service-ca/service-ca-74545575db-nsc2v" Jan 26 00:10:40 crc 
kubenswrapper[5124]: I0126 00:10:40.001169 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e93a2f69-37f1-47bc-b659-8684acf34de3-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-jkc7k\" (UID: \"e93a2f69-37f1-47bc-b659-8684acf34de3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jkc7k" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.001285 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acdc983c-4d4e-4a1e-82a3-a137fe39882a-config\") pod \"route-controller-manager-776cdc94d6-f6l2j\" (UID: \"acdc983c-4d4e-4a1e-82a3-a137fe39882a\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-f6l2j" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.001354 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c16907d-1bcd-420c-879d-65a0552e69d3-config-volume\") pod \"collect-profiles-29489760-ldpxs\" (UID: \"2c16907d-1bcd-420c-879d-65a0552e69d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-ldpxs" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.001387 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/23eb49a3-e378-481a-932f-83ec71b22e6d-signing-key\") pod \"service-ca-74545575db-nsc2v\" (UID: \"23eb49a3-e378-481a-932f-83ec71b22e6d\") " pod="openshift-service-ca/service-ca-74545575db-nsc2v" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.001418 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f8124ef-e842-4eaa-a6bb-54b67540b2ac-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-zbgtx\" (UID: \"8f8124ef-e842-4eaa-a6bb-54b67540b2ac\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-zbgtx" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.001469 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktt45\" (UniqueName: \"kubernetes.io/projected/2e062989-8ba6-44a5-8f95-e1958da237ad-kube-api-access-ktt45\") pod \"csi-hostpathplugin-kwjfc\" (UID: \"2e062989-8ba6-44a5-8f95-e1958da237ad\") " pod="hostpath-provisioner/csi-hostpathplugin-kwjfc" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.001499 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c696bafb-e286-4dc1-8edd-860c8c0564da-etcd-client\") pod \"apiserver-9ddfb9f55-s87zt\" (UID: \"c696bafb-e286-4dc1-8edd-860c8c0564da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.001518 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e47ad1f1-7281-4a86-bac9-bbaa37dfeab1-config\") pod \"openshift-kube-scheduler-operator-54f497555d-mbllj\" (UID: \"e47ad1f1-7281-4a86-bac9-bbaa37dfeab1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-mbllj" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 
00:10:40.001552 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b5e4a3d-13f4-42c6-9adb-30a826411994-etcd-service-ca\") pod \"etcd-operator-69b85846b6-sv2rt\" (UID: \"6b5e4a3d-13f4-42c6-9adb-30a826411994\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sv2rt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.001596 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5ce48d95-5f74-4d15-8f19-94cfd81c3dcf-trusted-ca\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.001618 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/498973e3-482d-4a19-9224-c3e67efc2a20-serving-cert\") pod \"apiserver-8596bd845d-fpklc\" (UID: \"498973e3-482d-4a19-9224-c3e67efc2a20\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fpklc" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.001715 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2046c412-f2fc-4d3e-97c7-fa57c6683752-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-qdvls\" (UID: \"2046c412-f2fc-4d3e-97c7-fa57c6683752\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-qdvls" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.001763 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cwfv8\" (UniqueName: \"kubernetes.io/projected/670e3869-615d-43d1-8b6a-e0c80cebaab9-kube-api-access-cwfv8\") pod \"cluster-image-registry-operator-86c45576b9-vq8mw\" (UID: \"670e3869-615d-43d1-8b6a-e0c80cebaab9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vq8mw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.001792 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c696bafb-e286-4dc1-8edd-860c8c0564da-audit\") pod \"apiserver-9ddfb9f55-s87zt\" (UID: \"c696bafb-e286-4dc1-8edd-860c8c0564da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.001850 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8f8124ef-e842-4eaa-a6bb-54b67540b2ac-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-zbgtx\" (UID: \"8f8124ef-e842-4eaa-a6bb-54b67540b2ac\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-zbgtx" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.001877 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt7vt\" (UniqueName: \"kubernetes.io/projected/c2cd8439-aeb3-4321-9842-11b3cbb37b0b-kube-api-access-wt7vt\") pod \"router-default-68cf44c8b8-9jvql\" (UID: \"c2cd8439-aeb3-4321-9842-11b3cbb37b0b\") " pod="openshift-ingress/router-default-68cf44c8b8-9jvql" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.001973 5124 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/670e3869-615d-43d1-8b6a-e0c80cebaab9-tmp\") pod \"cluster-image-registry-operator-86c45576b9-vq8mw\" (UID: \"670e3869-615d-43d1-8b6a-e0c80cebaab9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vq8mw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.002012 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f8124ef-e842-4eaa-a6bb-54b67540b2ac-config\") pod \"kube-controller-manager-operator-69d5f845f8-zbgtx\" (UID: \"8f8124ef-e842-4eaa-a6bb-54b67540b2ac\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-zbgtx" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.002037 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c696bafb-e286-4dc1-8edd-860c8c0564da-node-pullsecrets\") pod \"apiserver-9ddfb9f55-s87zt\" (UID: \"c696bafb-e286-4dc1-8edd-860c8c0564da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.002103 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjs99\" (UniqueName: \"kubernetes.io/projected/23eb49a3-e378-481a-932f-83ec71b22e6d-kube-api-access-hjs99\") pod \"service-ca-74545575db-nsc2v\" (UID: \"23eb49a3-e378-481a-932f-83ec71b22e6d\") " pod="openshift-service-ca/service-ca-74545575db-nsc2v" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.002149 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s7hk\" (UniqueName: \"kubernetes.io/projected/a219f23e-815a-42e8-82a6-941d1624c7d7-kube-api-access-8s7hk\") pod \"downloads-747b44746d-vcw8h\" (UID: \"a219f23e-815a-42e8-82a6-941d1624c7d7\") " pod="openshift-console/downloads-747b44746d-vcw8h" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.002190 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b9496837-38dd-4e08-bf40-9a191112e42a-images\") pod \"machine-api-operator-755bb95488-6629f\" (UID: \"b9496837-38dd-4e08-bf40-9a191112e42a\") " pod="openshift-machine-api/machine-api-operator-755bb95488-6629f" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.002211 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhts4\" (UniqueName: \"kubernetes.io/projected/b9496837-38dd-4e08-bf40-9a191112e42a-kube-api-access-rhts4\") pod \"machine-api-operator-755bb95488-6629f\" (UID: \"b9496837-38dd-4e08-bf40-9a191112e42a\") " pod="openshift-machine-api/machine-api-operator-755bb95488-6629f" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.002232 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7cvw\" (UniqueName: \"kubernetes.io/projected/acdc983c-4d4e-4a1e-82a3-a137fe39882a-kube-api-access-z7cvw\") pod \"route-controller-manager-776cdc94d6-f6l2j\" (UID: \"acdc983c-4d4e-4a1e-82a3-a137fe39882a\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-f6l2j" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.002261 5124 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5205d539-f164-46b4-858c-9ca958a1102a-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-xzm9l\" (UID: \"5205d539-f164-46b4-858c-9ca958a1102a\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-xzm9l" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.002305 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-v5jrb\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.002340 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/acdc983c-4d4e-4a1e-82a3-a137fe39882a-tmp\") pod \"route-controller-manager-776cdc94d6-f6l2j\" (UID: \"acdc983c-4d4e-4a1e-82a3-a137fe39882a\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-f6l2j" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.002314 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/670e3869-615d-43d1-8b6a-e0c80cebaab9-tmp\") pod \"cluster-image-registry-operator-86c45576b9-vq8mw\" (UID: \"670e3869-615d-43d1-8b6a-e0c80cebaab9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vq8mw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.002371 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/26da0b98-2814-44cd-b28b-a1b2ef0ee88e-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-5cjkn\" (UID: \"26da0b98-2814-44cd-b28b-a1b2ef0ee88e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5cjkn" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.002475 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec000458-4225-4aa1-b22e-244d7d137c9e-config\") pod \"kube-storage-version-migrator-operator-565b79b866-vfn25\" (UID: \"ec000458-4225-4aa1-b22e-244d7d137c9e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vfn25" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.002507 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwdmt\" (UniqueName: \"kubernetes.io/projected/2046c412-f2fc-4d3e-97c7-fa57c6683752-kube-api-access-jwdmt\") pod \"authentication-operator-7f5c659b84-qdvls\" (UID: \"2046c412-f2fc-4d3e-97c7-fa57c6683752\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-qdvls" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.002534 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fe9fae5-6a94-45aa-9fe5-086c9dddb3c1-serving-cert\") pod \"kube-apiserver-operator-575994946d-csld6\" (UID: \"4fe9fae5-6a94-45aa-9fe5-086c9dddb3c1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-csld6" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.002553 5124 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5ce48d95-5f74-4d15-8f19-94cfd81c3dcf-registry-tls\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.002573 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/973d580d-7e62-419e-be96-115733ca98bf-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-5hwt4\" (UID: \"973d580d-7e62-419e-be96-115733ca98bf\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-5hwt4" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.002614 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e47ad1f1-7281-4a86-bac9-bbaa37dfeab1-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-mbllj\" (UID: \"e47ad1f1-7281-4a86-bac9-bbaa37dfeab1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-mbllj" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.002694 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-v5jrb\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.002717 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5ce48d95-5f74-4d15-8f19-94cfd81c3dcf-bound-sa-token\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.002741 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f3b6839d-b688-438b-bf37-fa1f421afc27-available-featuregates\") pod \"openshift-config-operator-5777786469-lxzd9\" (UID: \"f3b6839d-b688-438b-bf37-fa1f421afc27\") " pod="openshift-config-operator/openshift-config-operator-5777786469-lxzd9" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.002788 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b5e4a3d-13f4-42c6-9adb-30a826411994-serving-cert\") pod \"etcd-operator-69b85846b6-sv2rt\" (UID: \"6b5e4a3d-13f4-42c6-9adb-30a826411994\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sv2rt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.002808 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfdd3fba-e428-46ea-a831-e53d949c342a-config\") pod \"service-ca-operator-5b9c976747-6np67\" (UID: \"dfdd3fba-e428-46ea-a831-e53d949c342a\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-6np67" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.002871 5124 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/26da0b98-2814-44cd-b28b-a1b2ef0ee88e-client-ca\") pod \"controller-manager-65b6cccf98-5cjkn\" (UID: \"26da0b98-2814-44cd-b28b-a1b2ef0ee88e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5cjkn" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.002888 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e93a2f69-37f1-47bc-b659-8684acf34de3-config\") pod \"openshift-apiserver-operator-846cbfc458-jkc7k\" (UID: \"e93a2f69-37f1-47bc-b659-8684acf34de3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jkc7k" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.002908 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5ce48d95-5f74-4d15-8f19-94cfd81c3dcf-ca-trust-extracted\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.002925 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2c16907d-1bcd-420c-879d-65a0552e69d3-secret-volume\") pod \"collect-profiles-29489760-ldpxs\" (UID: \"2c16907d-1bcd-420c-879d-65a0552e69d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-ldpxs" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.002950 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/498973e3-482d-4a19-9224-c3e67efc2a20-etcd-client\") pod \"apiserver-8596bd845d-fpklc\" (UID: \"498973e3-482d-4a19-9224-c3e67efc2a20\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fpklc" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.002970 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/acdc983c-4d4e-4a1e-82a3-a137fe39882a-client-ca\") pod \"route-controller-manager-776cdc94d6-f6l2j\" (UID: \"acdc983c-4d4e-4a1e-82a3-a137fe39882a\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-f6l2j" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.002986 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2046c412-f2fc-4d3e-97c7-fa57c6683752-config\") pod \"authentication-operator-7f5c659b84-qdvls\" (UID: \"2046c412-f2fc-4d3e-97c7-fa57c6683752\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-qdvls" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.003004 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5205d539-f164-46b4-858c-9ca958a1102a-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-xzm9l\" (UID: \"5205d539-f164-46b4-858c-9ca958a1102a\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-xzm9l" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.003038 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etcd-client\" (UniqueName: \"kubernetes.io/secret/6b5e4a3d-13f4-42c6-9adb-30a826411994-etcd-client\") pod \"etcd-operator-69b85846b6-sv2rt\" (UID: \"6b5e4a3d-13f4-42c6-9adb-30a826411994\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sv2rt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.003079 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a006121-cc9c-46f5-98db-14148f556b11-profile-collector-cert\") pod \"olm-operator-5cdf44d969-5tzb8\" (UID: \"8a006121-cc9c-46f5-98db-14148f556b11\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5tzb8" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.003104 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8wj5b\" (UniqueName: \"kubernetes.io/projected/f3b6839d-b688-438b-bf37-fa1f421afc27-kube-api-access-8wj5b\") pod \"openshift-config-operator-5777786469-lxzd9\" (UID: \"f3b6839d-b688-438b-bf37-fa1f421afc27\") " pod="openshift-config-operator/openshift-config-operator-5777786469-lxzd9" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.003134 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/498973e3-482d-4a19-9224-c3e67efc2a20-audit-policies\") pod \"apiserver-8596bd845d-fpklc\" (UID: \"498973e3-482d-4a19-9224-c3e67efc2a20\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fpklc" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.003174 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/036651d1-0c52-4454-8385-bf3f84e19378-serviceca\") pod \"image-pruner-29489760-dm2tt\" (UID: \"036651d1-0c52-4454-8385-bf3f84e19378\") " pod="openshift-image-registry/image-pruner-29489760-dm2tt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.003192 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2e062989-8ba6-44a5-8f95-e1958da237ad-csi-data-dir\") pod \"csi-hostpathplugin-kwjfc\" (UID: \"2e062989-8ba6-44a5-8f95-e1958da237ad\") " pod="hostpath-provisioner/csi-hostpathplugin-kwjfc" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.003210 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3b6839d-b688-438b-bf37-fa1f421afc27-serving-cert\") pod \"openshift-config-operator-5777786469-lxzd9\" (UID: \"f3b6839d-b688-438b-bf37-fa1f421afc27\") " pod="openshift-config-operator/openshift-config-operator-5777786469-lxzd9" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.003227 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/26da0b98-2814-44cd-b28b-a1b2ef0ee88e-tmp\") pod \"controller-manager-65b6cccf98-5cjkn\" (UID: \"26da0b98-2814-44cd-b28b-a1b2ef0ee88e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5cjkn" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.003245 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zmp97\" (UniqueName: \"kubernetes.io/projected/26da0b98-2814-44cd-b28b-a1b2ef0ee88e-kube-api-access-zmp97\") pod \"controller-manager-65b6cccf98-5cjkn\" (UID: 
\"26da0b98-2814-44cd-b28b-a1b2ef0ee88e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5cjkn" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.003264 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2cd8439-aeb3-4321-9842-11b3cbb37b0b-service-ca-bundle\") pod \"router-default-68cf44c8b8-9jvql\" (UID: \"c2cd8439-aeb3-4321-9842-11b3cbb37b0b\") " pod="openshift-ingress/router-default-68cf44c8b8-9jvql" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.003279 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5205d539-f164-46b4-858c-9ca958a1102a-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-xzm9l\" (UID: \"5205d539-f164-46b4-858c-9ca958a1102a\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-xzm9l" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.003299 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8a006121-cc9c-46f5-98db-14148f556b11-tmpfs\") pod \"olm-operator-5cdf44d969-5tzb8\" (UID: \"8a006121-cc9c-46f5-98db-14148f556b11\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5tzb8" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.003317 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/670e3869-615d-43d1-8b6a-e0c80cebaab9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-vq8mw\" (UID: \"670e3869-615d-43d1-8b6a-e0c80cebaab9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vq8mw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.003334 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-v5jrb\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.003351 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/670e3869-615d-43d1-8b6a-e0c80cebaab9-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-vq8mw\" (UID: \"670e3869-615d-43d1-8b6a-e0c80cebaab9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vq8mw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.003368 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/670e3869-615d-43d1-8b6a-e0c80cebaab9-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-vq8mw\" (UID: \"670e3869-615d-43d1-8b6a-e0c80cebaab9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vq8mw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.003422 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-v5jrb\" (UID: 
\"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.003449 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/26da0b98-2814-44cd-b28b-a1b2ef0ee88e-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-5cjkn\" (UID: \"26da0b98-2814-44cd-b28b-a1b2ef0ee88e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5cjkn" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.003792 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/26da0b98-2814-44cd-b28b-a1b2ef0ee88e-client-ca\") pod \"controller-manager-65b6cccf98-5cjkn\" (UID: \"26da0b98-2814-44cd-b28b-a1b2ef0ee88e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5cjkn" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.003830 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fccv\" (UniqueName: \"kubernetes.io/projected/ec000458-4225-4aa1-b22e-244d7d137c9e-kube-api-access-2fccv\") pod \"kube-storage-version-migrator-operator-565b79b866-vfn25\" (UID: \"ec000458-4225-4aa1-b22e-244d7d137c9e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vfn25" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.003856 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8h4t\" (UniqueName: \"kubernetes.io/projected/5205d539-f164-46b4-858c-9ca958a1102a-kube-api-access-d8h4t\") pod \"ingress-operator-6b9cb4dbcf-xzm9l\" (UID: \"5205d539-f164-46b4-858c-9ca958a1102a\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-xzm9l" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.003861 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f3b6839d-b688-438b-bf37-fa1f421afc27-available-featuregates\") pod \"openshift-config-operator-5777786469-lxzd9\" (UID: \"f3b6839d-b688-438b-bf37-fa1f421afc27\") " pod="openshift-config-operator/openshift-config-operator-5777786469-lxzd9" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.003875 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/460f5edc-0e33-44ee-b8ad-41e51e22924a-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-kpn7g\" (UID: \"460f5edc-0e33-44ee-b8ad-41e51e22924a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kpn7g" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.003895 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c696bafb-e286-4dc1-8edd-860c8c0564da-audit-dir\") pod \"apiserver-9ddfb9f55-s87zt\" (UID: \"c696bafb-e286-4dc1-8edd-860c8c0564da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.003912 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c2cd8439-aeb3-4321-9842-11b3cbb37b0b-stats-auth\") pod \"router-default-68cf44c8b8-9jvql\" (UID: 
\"c2cd8439-aeb3-4321-9842-11b3cbb37b0b\") " pod="openshift-ingress/router-default-68cf44c8b8-9jvql" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.003946 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsxbj\" (UniqueName: \"kubernetes.io/projected/6b5e4a3d-13f4-42c6-9adb-30a826411994-kube-api-access-rsxbj\") pod \"etcd-operator-69b85846b6-sv2rt\" (UID: \"6b5e4a3d-13f4-42c6-9adb-30a826411994\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sv2rt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.003973 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/973d580d-7e62-419e-be96-115733ca98bf-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-5hwt4\" (UID: \"973d580d-7e62-419e-be96-115733ca98bf\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-5hwt4" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.003992 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26da0b98-2814-44cd-b28b-a1b2ef0ee88e-config\") pod \"controller-manager-65b6cccf98-5cjkn\" (UID: \"26da0b98-2814-44cd-b28b-a1b2ef0ee88e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5cjkn" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.004009 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4fe9fae5-6a94-45aa-9fe5-086c9dddb3c1-tmp-dir\") pod \"kube-apiserver-operator-575994946d-csld6\" (UID: \"4fe9fae5-6a94-45aa-9fe5-086c9dddb3c1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-csld6" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.004027 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qswl\" (UniqueName: \"kubernetes.io/projected/dfdd3fba-e428-46ea-a831-e53d949c342a-kube-api-access-4qswl\") pod \"service-ca-operator-5b9c976747-6np67\" (UID: \"dfdd3fba-e428-46ea-a831-e53d949c342a\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-6np67" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.004597 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f8124ef-e842-4eaa-a6bb-54b67540b2ac-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-zbgtx\" (UID: \"8f8124ef-e842-4eaa-a6bb-54b67540b2ac\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-zbgtx" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.005435 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-v5jrb\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.005502 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec000458-4225-4aa1-b22e-244d7d137c9e-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-vfn25\" 
(UID: \"ec000458-4225-4aa1-b22e-244d7d137c9e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vfn25" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.005547 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-audit-dir\") pod \"oauth-openshift-66458b6674-v5jrb\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.005696 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-v5jrb\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.005709 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/670e3869-615d-43d1-8b6a-e0c80cebaab9-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-vq8mw\" (UID: \"670e3869-615d-43d1-8b6a-e0c80cebaab9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vq8mw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.005824 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5ce48d95-5f74-4d15-8f19-94cfd81c3dcf-registry-certificates\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.006268 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/26da0b98-2814-44cd-b28b-a1b2ef0ee88e-tmp\") pod \"controller-manager-65b6cccf98-5cjkn\" (UID: \"26da0b98-2814-44cd-b28b-a1b2ef0ee88e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5cjkn" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.006424 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a006121-cc9c-46f5-98db-14148f556b11-srv-cert\") pod \"olm-operator-5cdf44d969-5tzb8\" (UID: \"8a006121-cc9c-46f5-98db-14148f556b11\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5tzb8" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.007116 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/670e3869-615d-43d1-8b6a-e0c80cebaab9-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-vq8mw\" (UID: \"670e3869-615d-43d1-8b6a-e0c80cebaab9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vq8mw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.007195 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-audit-dir\") pod \"oauth-openshift-66458b6674-v5jrb\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 
00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.007308 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2046c412-f2fc-4d3e-97c7-fa57c6683752-serving-cert\") pod \"authentication-operator-7f5c659b84-qdvls\" (UID: \"2046c412-f2fc-4d3e-97c7-fa57c6683752\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-qdvls" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.007378 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4fe9fae5-6a94-45aa-9fe5-086c9dddb3c1-kube-api-access\") pod \"kube-apiserver-operator-575994946d-csld6\" (UID: \"4fe9fae5-6a94-45aa-9fe5-086c9dddb3c1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-csld6" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.007559 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c696bafb-e286-4dc1-8edd-860c8c0564da-encryption-config\") pod \"apiserver-9ddfb9f55-s87zt\" (UID: \"c696bafb-e286-4dc1-8edd-860c8c0564da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.007863 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-v5jrb\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.007945 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e47ad1f1-7281-4a86-bac9-bbaa37dfeab1-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-mbllj\" (UID: \"e47ad1f1-7281-4a86-bac9-bbaa37dfeab1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-mbllj" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.008092 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/498973e3-482d-4a19-9224-c3e67efc2a20-audit-dir\") pod \"apiserver-8596bd845d-fpklc\" (UID: \"498973e3-482d-4a19-9224-c3e67efc2a20\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fpklc" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.008144 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c696bafb-e286-4dc1-8edd-860c8c0564da-image-import-ca\") pod \"apiserver-9ddfb9f55-s87zt\" (UID: \"c696bafb-e286-4dc1-8edd-860c8c0564da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.008179 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acdc983c-4d4e-4a1e-82a3-a137fe39882a-serving-cert\") pod \"route-controller-manager-776cdc94d6-f6l2j\" (UID: \"acdc983c-4d4e-4a1e-82a3-a137fe39882a\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-f6l2j" Jan 26 00:10:40 crc 
kubenswrapper[5124]: I0126 00:10:40.008224 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2e062989-8ba6-44a5-8f95-e1958da237ad-socket-dir\") pod \"csi-hostpathplugin-kwjfc\" (UID: \"2e062989-8ba6-44a5-8f95-e1958da237ad\") " pod="hostpath-provisioner/csi-hostpathplugin-kwjfc" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.008276 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2e062989-8ba6-44a5-8f95-e1958da237ad-mountpoint-dir\") pod \"csi-hostpathplugin-kwjfc\" (UID: \"2e062989-8ba6-44a5-8f95-e1958da237ad\") " pod="hostpath-provisioner/csi-hostpathplugin-kwjfc" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.008334 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c696bafb-e286-4dc1-8edd-860c8c0564da-config\") pod \"apiserver-9ddfb9f55-s87zt\" (UID: \"c696bafb-e286-4dc1-8edd-860c8c0564da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.008530 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7xrk\" (UniqueName: \"kubernetes.io/projected/e93a2f69-37f1-47bc-b659-8684acf34de3-kube-api-access-w7xrk\") pod \"openshift-apiserver-operator-846cbfc458-jkc7k\" (UID: \"e93a2f69-37f1-47bc-b659-8684acf34de3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jkc7k" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.008978 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2046c412-f2fc-4d3e-97c7-fa57c6683752-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-qdvls\" (UID: \"2046c412-f2fc-4d3e-97c7-fa57c6683752\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-qdvls" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.009045 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e47ad1f1-7281-4a86-bac9-bbaa37dfeab1-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-mbllj\" (UID: \"e47ad1f1-7281-4a86-bac9-bbaa37dfeab1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-mbllj" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.009072 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b5e4a3d-13f4-42c6-9adb-30a826411994-config\") pod \"etcd-operator-69b85846b6-sv2rt\" (UID: \"6b5e4a3d-13f4-42c6-9adb-30a826411994\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sv2rt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.009112 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-v5jrb\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.009109 5124 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26da0b98-2814-44cd-b28b-a1b2ef0ee88e-serving-cert\") pod \"controller-manager-65b6cccf98-5cjkn\" (UID: \"26da0b98-2814-44cd-b28b-a1b2ef0ee88e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5cjkn" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.009177 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/498973e3-482d-4a19-9224-c3e67efc2a20-encryption-config\") pod \"apiserver-8596bd845d-fpklc\" (UID: \"498973e3-482d-4a19-9224-c3e67efc2a20\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fpklc" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.009209 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6b5e4a3d-13f4-42c6-9adb-30a826411994-tmp-dir\") pod \"etcd-operator-69b85846b6-sv2rt\" (UID: \"6b5e4a3d-13f4-42c6-9adb-30a826411994\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sv2rt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.009293 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-v5jrb\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.009322 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fe9fae5-6a94-45aa-9fe5-086c9dddb3c1-config\") pod \"kube-apiserver-operator-575994946d-csld6\" (UID: \"4fe9fae5-6a94-45aa-9fe5-086c9dddb3c1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-csld6" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.009354 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b9496837-38dd-4e08-bf40-9a191112e42a-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-6629f\" (UID: \"b9496837-38dd-4e08-bf40-9a191112e42a\") " pod="openshift-machine-api/machine-api-operator-755bb95488-6629f" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.009378 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r95zt\" (UniqueName: \"kubernetes.io/projected/460f5edc-0e33-44ee-b8ad-41e51e22924a-kube-api-access-r95zt\") pod \"package-server-manager-77f986bd66-kpn7g\" (UID: \"460f5edc-0e33-44ee-b8ad-41e51e22924a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kpn7g" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.009461 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c696bafb-e286-4dc1-8edd-860c8c0564da-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-s87zt\" (UID: \"c696bafb-e286-4dc1-8edd-860c8c0564da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.009740 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c696bafb-e286-4dc1-8edd-860c8c0564da-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-s87zt\" (UID: \"c696bafb-e286-4dc1-8edd-860c8c0564da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.010001 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clbcp\" (UniqueName: \"kubernetes.io/projected/973d580d-7e62-419e-be96-115733ca98bf-kube-api-access-clbcp\") pod \"marketplace-operator-547dbd544d-5hwt4\" (UID: \"973d580d-7e62-419e-be96-115733ca98bf\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-5hwt4" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.009871 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-v5jrb\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.010053 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26da0b98-2814-44cd-b28b-a1b2ef0ee88e-config\") pod \"controller-manager-65b6cccf98-5cjkn\" (UID: \"26da0b98-2814-44cd-b28b-a1b2ef0ee88e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5cjkn" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.010074 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9c7t\" (UniqueName: \"kubernetes.io/projected/c696bafb-e286-4dc1-8edd-860c8c0564da-kube-api-access-k9c7t\") pod \"apiserver-9ddfb9f55-s87zt\" (UID: \"c696bafb-e286-4dc1-8edd-860c8c0564da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.010163 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/973d580d-7e62-419e-be96-115733ca98bf-tmp\") pod \"marketplace-operator-547dbd544d-5hwt4\" (UID: \"973d580d-7e62-419e-be96-115733ca98bf\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-5hwt4" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.010224 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-v5jrb\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.010289 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-v5jrb\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.010305 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2e062989-8ba6-44a5-8f95-e1958da237ad-registration-dir\") pod \"csi-hostpathplugin-kwjfc\" (UID: 
\"2e062989-8ba6-44a5-8f95-e1958da237ad\") " pod="hostpath-provisioner/csi-hostpathplugin-kwjfc" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.010518 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2e062989-8ba6-44a5-8f95-e1958da237ad-plugins-dir\") pod \"csi-hostpathplugin-kwjfc\" (UID: \"2e062989-8ba6-44a5-8f95-e1958da237ad\") " pod="hostpath-provisioner/csi-hostpathplugin-kwjfc" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.010558 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5wpm\" (UniqueName: \"kubernetes.io/projected/8a006121-cc9c-46f5-98db-14148f556b11-kube-api-access-d5wpm\") pod \"olm-operator-5cdf44d969-5tzb8\" (UID: \"8a006121-cc9c-46f5-98db-14148f556b11\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5tzb8" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.010825 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-v5jrb\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.013800 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.013890 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-v5jrb\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.015914 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3b6839d-b688-438b-bf37-fa1f421afc27-serving-cert\") pod \"openshift-config-operator-5777786469-lxzd9\" (UID: \"f3b6839d-b688-438b-bf37-fa1f421afc27\") " pod="openshift-config-operator/openshift-config-operator-5777786469-lxzd9" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.016255 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/670e3869-615d-43d1-8b6a-e0c80cebaab9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-vq8mw\" (UID: \"670e3869-615d-43d1-8b6a-e0c80cebaab9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vq8mw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.017271 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-v5jrb\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.017714 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-v5jrb\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.018237 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-v5jrb\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.019872 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-v5jrb\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.020499 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-v5jrb\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.033824 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.053820 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.073515 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.094078 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.111520 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.111722 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e93a2f69-37f1-47bc-b659-8684acf34de3-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-jkc7k\" (UID: \"e93a2f69-37f1-47bc-b659-8684acf34de3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jkc7k" Jan 26 00:10:40 crc kubenswrapper[5124]: E0126 00:10:40.111799 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-26 00:10:40.611742116 +0000 UTC m=+118.520661475 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.111914 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acdc983c-4d4e-4a1e-82a3-a137fe39882a-config\") pod \"route-controller-manager-776cdc94d6-f6l2j\" (UID: \"acdc983c-4d4e-4a1e-82a3-a137fe39882a\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-f6l2j" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.112021 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtvh7\" (UniqueName: \"kubernetes.io/projected/fa9082e9-a8a6-433b-97ca-70128b99d6b7-kube-api-access-gtvh7\") pod \"machine-config-server-87k2l\" (UID: \"fa9082e9-a8a6-433b-97ca-70128b99d6b7\") " pod="openshift-machine-config-operator/machine-config-server-87k2l" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.112739 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e811bf67-7a6d-4279-bbff-b2cf02f66558-tmpfs\") pod \"catalog-operator-75ff9f647d-sdxrl\" (UID: \"e811bf67-7a6d-4279-bbff-b2cf02f66558\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sdxrl" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.112862 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/288efdc1-c138-42d5-9416-5c9d0faaa831-service-ca\") pod \"console-64d44f6ddf-b7nfk\" (UID: \"288efdc1-c138-42d5-9416-5c9d0faaa831\") " pod="openshift-console/console-64d44f6ddf-b7nfk" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.113390 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.113463 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c16907d-1bcd-420c-879d-65a0552e69d3-config-volume\") pod \"collect-profiles-29489760-ldpxs\" (UID: \"2c16907d-1bcd-420c-879d-65a0552e69d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-ldpxs" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.114639 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/23eb49a3-e378-481a-932f-83ec71b22e6d-signing-key\") pod \"service-ca-74545575db-nsc2v\" (UID: \"23eb49a3-e378-481a-932f-83ec71b22e6d\") " pod="openshift-service-ca/service-ca-74545575db-nsc2v" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.114727 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f8124ef-e842-4eaa-a6bb-54b67540b2ac-kube-api-access\") pod 
\"kube-controller-manager-operator-69d5f845f8-zbgtx\" (UID: \"8f8124ef-e842-4eaa-a6bb-54b67540b2ac\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-zbgtx" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.114221 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acdc983c-4d4e-4a1e-82a3-a137fe39882a-config\") pod \"route-controller-manager-776cdc94d6-f6l2j\" (UID: \"acdc983c-4d4e-4a1e-82a3-a137fe39882a\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-f6l2j" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.115013 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ktt45\" (UniqueName: \"kubernetes.io/projected/2e062989-8ba6-44a5-8f95-e1958da237ad-kube-api-access-ktt45\") pod \"csi-hostpathplugin-kwjfc\" (UID: \"2e062989-8ba6-44a5-8f95-e1958da237ad\") " pod="hostpath-provisioner/csi-hostpathplugin-kwjfc" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.115177 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c696bafb-e286-4dc1-8edd-860c8c0564da-etcd-client\") pod \"apiserver-9ddfb9f55-s87zt\" (UID: \"c696bafb-e286-4dc1-8edd-860c8c0564da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.115774 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e47ad1f1-7281-4a86-bac9-bbaa37dfeab1-config\") pod \"openshift-kube-scheduler-operator-54f497555d-mbllj\" (UID: \"e47ad1f1-7281-4a86-bac9-bbaa37dfeab1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-mbllj" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.115827 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7n7n\" (UniqueName: \"kubernetes.io/projected/d76339a3-5850-4e27-be40-03180dc8e526-kube-api-access-g7n7n\") pod \"dns-operator-799b87ffcd-lvq9k\" (UID: \"d76339a3-5850-4e27-be40-03180dc8e526\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-lvq9k" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.115864 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnrkm\" (UniqueName: \"kubernetes.io/projected/1185cd69-7c6a-46f0-acf1-64d587996124-kube-api-access-qnrkm\") pod \"machine-approver-54c688565-t5442\" (UID: \"1185cd69-7c6a-46f0-acf1-64d587996124\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-t5442" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.115951 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b5e4a3d-13f4-42c6-9adb-30a826411994-etcd-service-ca\") pod \"etcd-operator-69b85846b6-sv2rt\" (UID: \"6b5e4a3d-13f4-42c6-9adb-30a826411994\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sv2rt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.115988 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5ce48d95-5f74-4d15-8f19-94cfd81c3dcf-trusted-ca\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" 
Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.116014 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/498973e3-482d-4a19-9224-c3e67efc2a20-serving-cert\") pod \"apiserver-8596bd845d-fpklc\" (UID: \"498973e3-482d-4a19-9224-c3e67efc2a20\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fpklc" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.116036 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2046c412-f2fc-4d3e-97c7-fa57c6683752-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-qdvls\" (UID: \"2046c412-f2fc-4d3e-97c7-fa57c6683752\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-qdvls" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.116063 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c696bafb-e286-4dc1-8edd-860c8c0564da-audit\") pod \"apiserver-9ddfb9f55-s87zt\" (UID: \"c696bafb-e286-4dc1-8edd-860c8c0564da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.116089 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8f8124ef-e842-4eaa-a6bb-54b67540b2ac-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-zbgtx\" (UID: \"8f8124ef-e842-4eaa-a6bb-54b67540b2ac\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-zbgtx" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.116571 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wt7vt\" (UniqueName: \"kubernetes.io/projected/c2cd8439-aeb3-4321-9842-11b3cbb37b0b-kube-api-access-wt7vt\") pod \"router-default-68cf44c8b8-9jvql\" (UID: \"c2cd8439-aeb3-4321-9842-11b3cbb37b0b\") " pod="openshift-ingress/router-default-68cf44c8b8-9jvql" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.116651 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/288efdc1-c138-42d5-9416-5c9d0faaa831-console-oauth-config\") pod \"console-64d44f6ddf-b7nfk\" (UID: \"288efdc1-c138-42d5-9416-5c9d0faaa831\") " pod="openshift-console/console-64d44f6ddf-b7nfk" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.116778 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d76339a3-5850-4e27-be40-03180dc8e526-tmp-dir\") pod \"dns-operator-799b87ffcd-lvq9k\" (UID: \"d76339a3-5850-4e27-be40-03180dc8e526\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-lvq9k" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.116820 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f8124ef-e842-4eaa-a6bb-54b67540b2ac-config\") pod \"kube-controller-manager-operator-69d5f845f8-zbgtx\" (UID: \"8f8124ef-e842-4eaa-a6bb-54b67540b2ac\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-zbgtx" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.116839 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: 
\"kubernetes.io/empty-dir/a69d5905-85d8-49b8-ab54-15fc8f104c31-ready\") pod \"cni-sysctl-allowlist-ds-vp4mw\" (UID: \"a69d5905-85d8-49b8-ab54-15fc8f104c31\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vp4mw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.116856 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c696bafb-e286-4dc1-8edd-860c8c0564da-node-pullsecrets\") pod \"apiserver-9ddfb9f55-s87zt\" (UID: \"c696bafb-e286-4dc1-8edd-860c8c0564da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.116872 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hjs99\" (UniqueName: \"kubernetes.io/projected/23eb49a3-e378-481a-932f-83ec71b22e6d-kube-api-access-hjs99\") pod \"service-ca-74545575db-nsc2v\" (UID: \"23eb49a3-e378-481a-932f-83ec71b22e6d\") " pod="openshift-service-ca/service-ca-74545575db-nsc2v" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.116890 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8s7hk\" (UniqueName: \"kubernetes.io/projected/a219f23e-815a-42e8-82a6-941d1624c7d7-kube-api-access-8s7hk\") pod \"downloads-747b44746d-vcw8h\" (UID: \"a219f23e-815a-42e8-82a6-941d1624c7d7\") " pod="openshift-console/downloads-747b44746d-vcw8h" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.116906 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b9496837-38dd-4e08-bf40-9a191112e42a-images\") pod \"machine-api-operator-755bb95488-6629f\" (UID: \"b9496837-38dd-4e08-bf40-9a191112e42a\") " pod="openshift-machine-api/machine-api-operator-755bb95488-6629f" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.116924 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rhts4\" (UniqueName: \"kubernetes.io/projected/b9496837-38dd-4e08-bf40-9a191112e42a-kube-api-access-rhts4\") pod \"machine-api-operator-755bb95488-6629f\" (UID: \"b9496837-38dd-4e08-bf40-9a191112e42a\") " pod="openshift-machine-api/machine-api-operator-755bb95488-6629f" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.116947 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z7cvw\" (UniqueName: \"kubernetes.io/projected/acdc983c-4d4e-4a1e-82a3-a137fe39882a-kube-api-access-z7cvw\") pod \"route-controller-manager-776cdc94d6-f6l2j\" (UID: \"acdc983c-4d4e-4a1e-82a3-a137fe39882a\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-f6l2j" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.116999 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5205d539-f164-46b4-858c-9ca958a1102a-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-xzm9l\" (UID: \"5205d539-f164-46b4-858c-9ca958a1102a\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-xzm9l" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.117017 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxlwg\" (UniqueName: \"kubernetes.io/projected/a69d5905-85d8-49b8-ab54-15fc8f104c31-kube-api-access-fxlwg\") pod \"cni-sysctl-allowlist-ds-vp4mw\" (UID: \"a69d5905-85d8-49b8-ab54-15fc8f104c31\") " 
pod="openshift-multus/cni-sysctl-allowlist-ds-vp4mw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.117034 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/acdc983c-4d4e-4a1e-82a3-a137fe39882a-tmp\") pod \"route-controller-manager-776cdc94d6-f6l2j\" (UID: \"acdc983c-4d4e-4a1e-82a3-a137fe39882a\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-f6l2j" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.117056 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b14632cd-c5f4-41b7-be2f-71d6f7f2c264-config\") pod \"openshift-controller-manager-operator-686468bdd5-zfncw\" (UID: \"b14632cd-c5f4-41b7-be2f-71d6f7f2c264\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zfncw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.117074 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1b3bd69c-7b97-42bb-9f12-7d690416e91f-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-rb8jj\" (UID: \"1b3bd69c-7b97-42bb-9f12-7d690416e91f\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-rb8jj" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.117143 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec000458-4225-4aa1-b22e-244d7d137c9e-config\") pod \"kube-storage-version-migrator-operator-565b79b866-vfn25\" (UID: \"ec000458-4225-4aa1-b22e-244d7d137c9e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vfn25" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.117198 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jwdmt\" (UniqueName: \"kubernetes.io/projected/2046c412-f2fc-4d3e-97c7-fa57c6683752-kube-api-access-jwdmt\") pod \"authentication-operator-7f5c659b84-qdvls\" (UID: \"2046c412-f2fc-4d3e-97c7-fa57c6683752\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-qdvls" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.117220 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fe9fae5-6a94-45aa-9fe5-086c9dddb3c1-serving-cert\") pod \"kube-apiserver-operator-575994946d-csld6\" (UID: \"4fe9fae5-6a94-45aa-9fe5-086c9dddb3c1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-csld6" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.117256 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e811bf67-7a6d-4279-bbff-b2cf02f66558-srv-cert\") pod \"catalog-operator-75ff9f647d-sdxrl\" (UID: \"e811bf67-7a6d-4279-bbff-b2cf02f66558\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sdxrl" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.117320 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5ce48d95-5f74-4d15-8f19-94cfd81c3dcf-registry-tls\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " 
pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.117353 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/973d580d-7e62-419e-be96-115733ca98bf-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-5hwt4\" (UID: \"973d580d-7e62-419e-be96-115733ca98bf\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-5hwt4" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.117383 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e47ad1f1-7281-4a86-bac9-bbaa37dfeab1-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-mbllj\" (UID: \"e47ad1f1-7281-4a86-bac9-bbaa37dfeab1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-mbllj" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.117471 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c696bafb-e286-4dc1-8edd-860c8c0564da-node-pullsecrets\") pod \"apiserver-9ddfb9f55-s87zt\" (UID: \"c696bafb-e286-4dc1-8edd-860c8c0564da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.117511 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c696bafb-e286-4dc1-8edd-860c8c0564da-audit\") pod \"apiserver-9ddfb9f55-s87zt\" (UID: \"c696bafb-e286-4dc1-8edd-860c8c0564da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.117546 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b5e4a3d-13f4-42c6-9adb-30a826411994-etcd-service-ca\") pod \"etcd-operator-69b85846b6-sv2rt\" (UID: \"6b5e4a3d-13f4-42c6-9adb-30a826411994\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sv2rt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.117517 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5ce48d95-5f74-4d15-8f19-94cfd81c3dcf-bound-sa-token\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.117657 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b5e4a3d-13f4-42c6-9adb-30a826411994-serving-cert\") pod \"etcd-operator-69b85846b6-sv2rt\" (UID: \"6b5e4a3d-13f4-42c6-9adb-30a826411994\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sv2rt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.117696 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfdd3fba-e428-46ea-a831-e53d949c342a-config\") pod \"service-ca-operator-5b9c976747-6np67\" (UID: \"dfdd3fba-e428-46ea-a831-e53d949c342a\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-6np67" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.117731 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrs9s\" (UniqueName: 
\"kubernetes.io/projected/b14632cd-c5f4-41b7-be2f-71d6f7f2c264-kube-api-access-rrs9s\") pod \"openshift-controller-manager-operator-686468bdd5-zfncw\" (UID: \"b14632cd-c5f4-41b7-be2f-71d6f7f2c264\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zfncw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.117768 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e93a2f69-37f1-47bc-b659-8684acf34de3-config\") pod \"openshift-apiserver-operator-846cbfc458-jkc7k\" (UID: \"e93a2f69-37f1-47bc-b659-8684acf34de3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jkc7k" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.117778 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2046c412-f2fc-4d3e-97c7-fa57c6683752-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-qdvls\" (UID: \"2046c412-f2fc-4d3e-97c7-fa57c6683752\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-qdvls" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.117788 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5ce48d95-5f74-4d15-8f19-94cfd81c3dcf-ca-trust-extracted\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.117856 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2c16907d-1bcd-420c-879d-65a0552e69d3-secret-volume\") pod \"collect-profiles-29489760-ldpxs\" (UID: \"2c16907d-1bcd-420c-879d-65a0552e69d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-ldpxs" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.117887 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mppv7\" (UniqueName: \"kubernetes.io/projected/839e8646-b712-4725-8456-806e52a3144c-kube-api-access-mppv7\") pod \"packageserver-7d4fc7d867-zbjgw\" (UID: \"839e8646-b712-4725-8456-806e52a3144c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zbjgw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.117916 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/498973e3-482d-4a19-9224-c3e67efc2a20-etcd-client\") pod \"apiserver-8596bd845d-fpklc\" (UID: \"498973e3-482d-4a19-9224-c3e67efc2a20\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fpklc" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.117944 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/acdc983c-4d4e-4a1e-82a3-a137fe39882a-client-ca\") pod \"route-controller-manager-776cdc94d6-f6l2j\" (UID: \"acdc983c-4d4e-4a1e-82a3-a137fe39882a\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-f6l2j" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.117969 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2046c412-f2fc-4d3e-97c7-fa57c6683752-config\") pod 
\"authentication-operator-7f5c659b84-qdvls\" (UID: \"2046c412-f2fc-4d3e-97c7-fa57c6683752\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-qdvls" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.117994 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5205d539-f164-46b4-858c-9ca958a1102a-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-xzm9l\" (UID: \"5205d539-f164-46b4-858c-9ca958a1102a\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-xzm9l" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.118017 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6b5e4a3d-13f4-42c6-9adb-30a826411994-etcd-client\") pod \"etcd-operator-69b85846b6-sv2rt\" (UID: \"6b5e4a3d-13f4-42c6-9adb-30a826411994\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sv2rt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.118042 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e451454a-5a94-4535-823c-523ea6f6f7de-cert\") pod \"ingress-canary-nc9fk\" (UID: \"e451454a-5a94-4535-823c-523ea6f6f7de\") " pod="openshift-ingress-canary/ingress-canary-nc9fk" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.118065 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5ce48d95-5f74-4d15-8f19-94cfd81c3dcf-ca-trust-extracted\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.118067 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4ad39e4e-4d41-443b-bfc7-a4ec7113664c-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-8cj7n\" (UID: \"4ad39e4e-4d41-443b-bfc7-a4ec7113664c\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8cj7n" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.118122 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a006121-cc9c-46f5-98db-14148f556b11-profile-collector-cert\") pod \"olm-operator-5cdf44d969-5tzb8\" (UID: \"8a006121-cc9c-46f5-98db-14148f556b11\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5tzb8" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.118142 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/498973e3-482d-4a19-9224-c3e67efc2a20-audit-policies\") pod \"apiserver-8596bd845d-fpklc\" (UID: \"498973e3-482d-4a19-9224-c3e67efc2a20\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fpklc" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.118162 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b3398b97-1658-4344-afde-a15d309846c9-tmp-dir\") pod \"dns-default-n64rh\" (UID: \"b3398b97-1658-4344-afde-a15d309846c9\") " pod="openshift-dns/dns-default-n64rh" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.118190 5124 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/036651d1-0c52-4454-8385-bf3f84e19378-serviceca\") pod \"image-pruner-29489760-dm2tt\" (UID: \"036651d1-0c52-4454-8385-bf3f84e19378\") " pod="openshift-image-registry/image-pruner-29489760-dm2tt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.118196 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5ce48d95-5f74-4d15-8f19-94cfd81c3dcf-trusted-ca\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.118208 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2e062989-8ba6-44a5-8f95-e1958da237ad-csi-data-dir\") pod \"csi-hostpathplugin-kwjfc\" (UID: \"2e062989-8ba6-44a5-8f95-e1958da237ad\") " pod="hostpath-provisioner/csi-hostpathplugin-kwjfc" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.118248 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c696bafb-e286-4dc1-8edd-860c8c0564da-etcd-client\") pod \"apiserver-9ddfb9f55-s87zt\" (UID: \"c696bafb-e286-4dc1-8edd-860c8c0564da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.118267 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2e062989-8ba6-44a5-8f95-e1958da237ad-csi-data-dir\") pod \"csi-hostpathplugin-kwjfc\" (UID: \"2e062989-8ba6-44a5-8f95-e1958da237ad\") " pod="hostpath-provisioner/csi-hostpathplugin-kwjfc" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.118265 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4qpp\" (UniqueName: \"kubernetes.io/projected/80cd99f0-6ac5-4187-9bdd-79dde0e74a57-kube-api-access-b4qpp\") pod \"cluster-samples-operator-6b564684c8-9qgdz\" (UID: \"80cd99f0-6ac5-4187-9bdd-79dde0e74a57\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-9qgdz" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.118355 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6vhx\" (UniqueName: \"kubernetes.io/projected/27a594f4-28ad-49d0-8ab7-f0c0ff14d65c-kube-api-access-z6vhx\") pod \"migrator-866fcbc849-fqxww\" (UID: \"27a594f4-28ad-49d0-8ab7-f0c0ff14d65c\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-fqxww" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.119010 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/498973e3-482d-4a19-9224-c3e67efc2a20-audit-policies\") pod \"apiserver-8596bd845d-fpklc\" (UID: \"498973e3-482d-4a19-9224-c3e67efc2a20\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fpklc" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.119090 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2cd8439-aeb3-4321-9842-11b3cbb37b0b-service-ca-bundle\") pod \"router-default-68cf44c8b8-9jvql\" (UID: \"c2cd8439-aeb3-4321-9842-11b3cbb37b0b\") " pod="openshift-ingress/router-default-68cf44c8b8-9jvql" Jan 26 00:10:40 crc 
kubenswrapper[5124]: I0126 00:10:40.119126 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5205d539-f164-46b4-858c-9ca958a1102a-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-xzm9l\" (UID: \"5205d539-f164-46b4-858c-9ca958a1102a\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-xzm9l" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.119155 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8a006121-cc9c-46f5-98db-14148f556b11-tmpfs\") pod \"olm-operator-5cdf44d969-5tzb8\" (UID: \"8a006121-cc9c-46f5-98db-14148f556b11\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5tzb8" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.119183 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7bpv\" (UniqueName: \"kubernetes.io/projected/288efdc1-c138-42d5-9416-5c9d0faaa831-kube-api-access-d7bpv\") pod \"console-64d44f6ddf-b7nfk\" (UID: \"288efdc1-c138-42d5-9416-5c9d0faaa831\") " pod="openshift-console/console-64d44f6ddf-b7nfk" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.119206 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a69d5905-85d8-49b8-ab54-15fc8f104c31-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-vp4mw\" (UID: \"a69d5905-85d8-49b8-ab54-15fc8f104c31\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vp4mw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.119234 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/80cd99f0-6ac5-4187-9bdd-79dde0e74a57-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-9qgdz\" (UID: \"80cd99f0-6ac5-4187-9bdd-79dde0e74a57\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-9qgdz" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.119258 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1185cd69-7c6a-46f0-acf1-64d587996124-machine-approver-tls\") pod \"machine-approver-54c688565-t5442\" (UID: \"1185cd69-7c6a-46f0-acf1-64d587996124\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-t5442" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.119290 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2fccv\" (UniqueName: \"kubernetes.io/projected/ec000458-4225-4aa1-b22e-244d7d137c9e-kube-api-access-2fccv\") pod \"kube-storage-version-migrator-operator-565b79b866-vfn25\" (UID: \"ec000458-4225-4aa1-b22e-244d7d137c9e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vfn25" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.119291 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2046c412-f2fc-4d3e-97c7-fa57c6683752-config\") pod \"authentication-operator-7f5c659b84-qdvls\" (UID: \"2046c412-f2fc-4d3e-97c7-fa57c6683752\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-qdvls" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.119316 5124 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d8h4t\" (UniqueName: \"kubernetes.io/projected/5205d539-f164-46b4-858c-9ca958a1102a-kube-api-access-d8h4t\") pod \"ingress-operator-6b9cb4dbcf-xzm9l\" (UID: \"5205d539-f164-46b4-858c-9ca958a1102a\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-xzm9l" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.119341 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1185cd69-7c6a-46f0-acf1-64d587996124-config\") pod \"machine-approver-54c688565-t5442\" (UID: \"1185cd69-7c6a-46f0-acf1-64d587996124\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-t5442" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.119367 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqtl7\" (UniqueName: \"kubernetes.io/projected/cf1e5da6-8866-4e4d-bafe-84bc0f76c41f-kube-api-access-jqtl7\") pod \"multus-admission-controller-69db94689b-wpz4s\" (UID: \"cf1e5da6-8866-4e4d-bafe-84bc0f76c41f\") " pod="openshift-multus/multus-admission-controller-69db94689b-wpz4s" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.119398 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/460f5edc-0e33-44ee-b8ad-41e51e22924a-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-kpn7g\" (UID: \"460f5edc-0e33-44ee-b8ad-41e51e22924a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kpn7g" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.119423 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b14632cd-c5f4-41b7-be2f-71d6f7f2c264-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-zfncw\" (UID: \"b14632cd-c5f4-41b7-be2f-71d6f7f2c264\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zfncw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.119448 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c696bafb-e286-4dc1-8edd-860c8c0564da-audit-dir\") pod \"apiserver-9ddfb9f55-s87zt\" (UID: \"c696bafb-e286-4dc1-8edd-860c8c0564da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.119470 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c2cd8439-aeb3-4321-9842-11b3cbb37b0b-stats-auth\") pod \"router-default-68cf44c8b8-9jvql\" (UID: \"c2cd8439-aeb3-4321-9842-11b3cbb37b0b\") " pod="openshift-ingress/router-default-68cf44c8b8-9jvql" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.119497 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rsxbj\" (UniqueName: \"kubernetes.io/projected/6b5e4a3d-13f4-42c6-9adb-30a826411994-kube-api-access-rsxbj\") pod \"etcd-operator-69b85846b6-sv2rt\" (UID: \"6b5e4a3d-13f4-42c6-9adb-30a826411994\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sv2rt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.119527 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/973d580d-7e62-419e-be96-115733ca98bf-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-5hwt4\" (UID: \"973d580d-7e62-419e-be96-115733ca98bf\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-5hwt4" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.119551 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b3398b97-1658-4344-afde-a15d309846c9-metrics-tls\") pod \"dns-default-n64rh\" (UID: \"b3398b97-1658-4344-afde-a15d309846c9\") " pod="openshift-dns/dns-default-n64rh" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.119575 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l6sb\" (UniqueName: \"kubernetes.io/projected/e811bf67-7a6d-4279-bbff-b2cf02f66558-kube-api-access-2l6sb\") pod \"catalog-operator-75ff9f647d-sdxrl\" (UID: \"e811bf67-7a6d-4279-bbff-b2cf02f66558\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sdxrl" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.119619 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1185cd69-7c6a-46f0-acf1-64d587996124-auth-proxy-config\") pod \"machine-approver-54c688565-t5442\" (UID: \"1185cd69-7c6a-46f0-acf1-64d587996124\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-t5442" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.119652 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4fe9fae5-6a94-45aa-9fe5-086c9dddb3c1-tmp-dir\") pod \"kube-apiserver-operator-575994946d-csld6\" (UID: \"4fe9fae5-6a94-45aa-9fe5-086c9dddb3c1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-csld6" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.119677 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4qswl\" (UniqueName: \"kubernetes.io/projected/dfdd3fba-e428-46ea-a831-e53d949c342a-kube-api-access-4qswl\") pod \"service-ca-operator-5b9c976747-6np67\" (UID: \"dfdd3fba-e428-46ea-a831-e53d949c342a\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-6np67" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.119823 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f8124ef-e842-4eaa-a6bb-54b67540b2ac-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-zbgtx\" (UID: \"8f8124ef-e842-4eaa-a6bb-54b67540b2ac\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-zbgtx" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.120083 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b9496837-38dd-4e08-bf40-9a191112e42a-images\") pod \"machine-api-operator-755bb95488-6629f\" (UID: \"b9496837-38dd-4e08-bf40-9a191112e42a\") " pod="openshift-machine-api/machine-api-operator-755bb95488-6629f" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.120161 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c696bafb-e286-4dc1-8edd-860c8c0564da-audit-dir\") pod 
\"apiserver-9ddfb9f55-s87zt\" (UID: \"c696bafb-e286-4dc1-8edd-860c8c0564da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.120212 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/acdc983c-4d4e-4a1e-82a3-a137fe39882a-tmp\") pod \"route-controller-manager-776cdc94d6-f6l2j\" (UID: \"acdc983c-4d4e-4a1e-82a3-a137fe39882a\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-f6l2j" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.120363 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec000458-4225-4aa1-b22e-244d7d137c9e-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-vfn25\" (UID: \"ec000458-4225-4aa1-b22e-244d7d137c9e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vfn25" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.120417 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/036651d1-0c52-4454-8385-bf3f84e19378-serviceca\") pod \"image-pruner-29489760-dm2tt\" (UID: \"036651d1-0c52-4454-8385-bf3f84e19378\") " pod="openshift-image-registry/image-pruner-29489760-dm2tt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.120433 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lft8d\" (UniqueName: \"kubernetes.io/projected/4ad39e4e-4d41-443b-bfc7-a4ec7113664c-kube-api-access-lft8d\") pod \"machine-config-operator-67c9d58cbb-8cj7n\" (UID: \"4ad39e4e-4d41-443b-bfc7-a4ec7113664c\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8cj7n" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.120531 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5ce48d95-5f74-4d15-8f19-94cfd81c3dcf-registry-tls\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.120529 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5ce48d95-5f74-4d15-8f19-94cfd81c3dcf-registry-certificates\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.120542 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8f8124ef-e842-4eaa-a6bb-54b67540b2ac-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-zbgtx\" (UID: \"8f8124ef-e842-4eaa-a6bb-54b67540b2ac\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-zbgtx" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.120607 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8a006121-cc9c-46f5-98db-14148f556b11-tmpfs\") pod \"olm-operator-5cdf44d969-5tzb8\" (UID: \"8a006121-cc9c-46f5-98db-14148f556b11\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5tzb8" Jan 26 00:10:40 crc 
kubenswrapper[5124]: I0126 00:10:40.120619 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a006121-cc9c-46f5-98db-14148f556b11-srv-cert\") pod \"olm-operator-5cdf44d969-5tzb8\" (UID: \"8a006121-cc9c-46f5-98db-14148f556b11\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5tzb8" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.120677 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf6g2\" (UniqueName: \"kubernetes.io/projected/e451454a-5a94-4535-823c-523ea6f6f7de-kube-api-access-kf6g2\") pod \"ingress-canary-nc9fk\" (UID: \"e451454a-5a94-4535-823c-523ea6f6f7de\") " pod="openshift-ingress-canary/ingress-canary-nc9fk" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.120781 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2046c412-f2fc-4d3e-97c7-fa57c6683752-serving-cert\") pod \"authentication-operator-7f5c659b84-qdvls\" (UID: \"2046c412-f2fc-4d3e-97c7-fa57c6683752\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-qdvls" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.120774 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4fe9fae5-6a94-45aa-9fe5-086c9dddb3c1-tmp-dir\") pod \"kube-apiserver-operator-575994946d-csld6\" (UID: \"4fe9fae5-6a94-45aa-9fe5-086c9dddb3c1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-csld6" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.120832 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4fe9fae5-6a94-45aa-9fe5-086c9dddb3c1-kube-api-access\") pod \"kube-apiserver-operator-575994946d-csld6\" (UID: \"4fe9fae5-6a94-45aa-9fe5-086c9dddb3c1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-csld6" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.120873 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/839e8646-b712-4725-8456-806e52a3144c-tmpfs\") pod \"packageserver-7d4fc7d867-zbjgw\" (UID: \"839e8646-b712-4725-8456-806e52a3144c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zbjgw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.120901 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/288efdc1-c138-42d5-9416-5c9d0faaa831-trusted-ca-bundle\") pod \"console-64d44f6ddf-b7nfk\" (UID: \"288efdc1-c138-42d5-9416-5c9d0faaa831\") " pod="openshift-console/console-64d44f6ddf-b7nfk" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.120934 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d76339a3-5850-4e27-be40-03180dc8e526-metrics-tls\") pod \"dns-operator-799b87ffcd-lvq9k\" (UID: \"d76339a3-5850-4e27-be40-03180dc8e526\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-lvq9k" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.121033 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/c696bafb-e286-4dc1-8edd-860c8c0564da-encryption-config\") pod \"apiserver-9ddfb9f55-s87zt\" (UID: \"c696bafb-e286-4dc1-8edd-860c8c0564da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.121094 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f09670d-b0a1-4fa2-9d30-7b82c260e38d-config\") pod \"console-operator-67c89758df-ns6rw\" (UID: \"9f09670d-b0a1-4fa2-9d30-7b82c260e38d\") " pod="openshift-console-operator/console-operator-67c89758df-ns6rw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.121138 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f09670d-b0a1-4fa2-9d30-7b82c260e38d-serving-cert\") pod \"console-operator-67c89758df-ns6rw\" (UID: \"9f09670d-b0a1-4fa2-9d30-7b82c260e38d\") " pod="openshift-console-operator/console-operator-67c89758df-ns6rw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.121181 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f09670d-b0a1-4fa2-9d30-7b82c260e38d-trusted-ca\") pod \"console-operator-67c89758df-ns6rw\" (UID: \"9f09670d-b0a1-4fa2-9d30-7b82c260e38d\") " pod="openshift-console-operator/console-operator-67c89758df-ns6rw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.121160 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/acdc983c-4d4e-4a1e-82a3-a137fe39882a-client-ca\") pod \"route-controller-manager-776cdc94d6-f6l2j\" (UID: \"acdc983c-4d4e-4a1e-82a3-a137fe39882a\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-f6l2j" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.121230 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/288efdc1-c138-42d5-9416-5c9d0faaa831-oauth-serving-cert\") pod \"console-64d44f6ddf-b7nfk\" (UID: \"288efdc1-c138-42d5-9416-5c9d0faaa831\") " pod="openshift-console/console-64d44f6ddf-b7nfk" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.121277 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e47ad1f1-7281-4a86-bac9-bbaa37dfeab1-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-mbllj\" (UID: \"e47ad1f1-7281-4a86-bac9-bbaa37dfeab1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-mbllj" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.121318 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1b3bd69c-7b97-42bb-9f12-7d690416e91f-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-rb8jj\" (UID: \"1b3bd69c-7b97-42bb-9f12-7d690416e91f\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-rb8jj" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.121371 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/498973e3-482d-4a19-9224-c3e67efc2a20-audit-dir\") pod \"apiserver-8596bd845d-fpklc\" (UID: 
\"498973e3-482d-4a19-9224-c3e67efc2a20\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fpklc" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.121418 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c696bafb-e286-4dc1-8edd-860c8c0564da-image-import-ca\") pod \"apiserver-9ddfb9f55-s87zt\" (UID: \"c696bafb-e286-4dc1-8edd-860c8c0564da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.121434 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6b5e4a3d-13f4-42c6-9adb-30a826411994-etcd-client\") pod \"etcd-operator-69b85846b6-sv2rt\" (UID: \"6b5e4a3d-13f4-42c6-9adb-30a826411994\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sv2rt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.121460 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/498973e3-482d-4a19-9224-c3e67efc2a20-audit-dir\") pod \"apiserver-8596bd845d-fpklc\" (UID: \"498973e3-482d-4a19-9224-c3e67efc2a20\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fpklc" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.121463 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acdc983c-4d4e-4a1e-82a3-a137fe39882a-serving-cert\") pod \"route-controller-manager-776cdc94d6-f6l2j\" (UID: \"acdc983c-4d4e-4a1e-82a3-a137fe39882a\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-f6l2j" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.121500 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2e062989-8ba6-44a5-8f95-e1958da237ad-socket-dir\") pod \"csi-hostpathplugin-kwjfc\" (UID: \"2e062989-8ba6-44a5-8f95-e1958da237ad\") " pod="hostpath-provisioner/csi-hostpathplugin-kwjfc" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.121519 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2e062989-8ba6-44a5-8f95-e1958da237ad-mountpoint-dir\") pod \"csi-hostpathplugin-kwjfc\" (UID: \"2e062989-8ba6-44a5-8f95-e1958da237ad\") " pod="hostpath-provisioner/csi-hostpathplugin-kwjfc" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.121538 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/839e8646-b712-4725-8456-806e52a3144c-apiservice-cert\") pod \"packageserver-7d4fc7d867-zbjgw\" (UID: \"839e8646-b712-4725-8456-806e52a3144c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zbjgw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.121569 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c696bafb-e286-4dc1-8edd-860c8c0564da-config\") pod \"apiserver-9ddfb9f55-s87zt\" (UID: \"c696bafb-e286-4dc1-8edd-860c8c0564da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.121601 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/839e8646-b712-4725-8456-806e52a3144c-webhook-cert\") pod \"packageserver-7d4fc7d867-zbjgw\" (UID: \"839e8646-b712-4725-8456-806e52a3144c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zbjgw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.121621 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5ce48d95-5f74-4d15-8f19-94cfd81c3dcf-registry-certificates\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.121626 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w7xrk\" (UniqueName: \"kubernetes.io/projected/e93a2f69-37f1-47bc-b659-8684acf34de3-kube-api-access-w7xrk\") pod \"openshift-apiserver-operator-846cbfc458-jkc7k\" (UID: \"e93a2f69-37f1-47bc-b659-8684acf34de3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jkc7k" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.121684 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2046c412-f2fc-4d3e-97c7-fa57c6683752-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-qdvls\" (UID: \"2046c412-f2fc-4d3e-97c7-fa57c6683752\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-qdvls" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.121710 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e47ad1f1-7281-4a86-bac9-bbaa37dfeab1-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-mbllj\" (UID: \"e47ad1f1-7281-4a86-bac9-bbaa37dfeab1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-mbllj" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.121730 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b5e4a3d-13f4-42c6-9adb-30a826411994-config\") pod \"etcd-operator-69b85846b6-sv2rt\" (UID: \"6b5e4a3d-13f4-42c6-9adb-30a826411994\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sv2rt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.121758 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/498973e3-482d-4a19-9224-c3e67efc2a20-encryption-config\") pod \"apiserver-8596bd845d-fpklc\" (UID: \"498973e3-482d-4a19-9224-c3e67efc2a20\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fpklc" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.121776 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6b5e4a3d-13f4-42c6-9adb-30a826411994-tmp-dir\") pod \"etcd-operator-69b85846b6-sv2rt\" (UID: \"6b5e4a3d-13f4-42c6-9adb-30a826411994\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sv2rt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.121797 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b14632cd-c5f4-41b7-be2f-71d6f7f2c264-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-zfncw\" (UID: \"b14632cd-c5f4-41b7-be2f-71d6f7f2c264\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zfncw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.121828 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fe9fae5-6a94-45aa-9fe5-086c9dddb3c1-config\") pod \"kube-apiserver-operator-575994946d-csld6\" (UID: \"4fe9fae5-6a94-45aa-9fe5-086c9dddb3c1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-csld6" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.121863 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b9496837-38dd-4e08-bf40-9a191112e42a-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-6629f\" (UID: \"b9496837-38dd-4e08-bf40-9a191112e42a\") " pod="openshift-machine-api/machine-api-operator-755bb95488-6629f" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.121901 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r95zt\" (UniqueName: \"kubernetes.io/projected/460f5edc-0e33-44ee-b8ad-41e51e22924a-kube-api-access-r95zt\") pod \"package-server-manager-77f986bd66-kpn7g\" (UID: \"460f5edc-0e33-44ee-b8ad-41e51e22924a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kpn7g" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.121914 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2e062989-8ba6-44a5-8f95-e1958da237ad-socket-dir\") pod \"csi-hostpathplugin-kwjfc\" (UID: \"2e062989-8ba6-44a5-8f95-e1958da237ad\") " pod="hostpath-provisioner/csi-hostpathplugin-kwjfc" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.121930 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpcz4\" (UniqueName: \"kubernetes.io/projected/b3398b97-1658-4344-afde-a15d309846c9-kube-api-access-lpcz4\") pod \"dns-default-n64rh\" (UID: \"b3398b97-1658-4344-afde-a15d309846c9\") " pod="openshift-dns/dns-default-n64rh" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.121957 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2e062989-8ba6-44a5-8f95-e1958da237ad-mountpoint-dir\") pod \"csi-hostpathplugin-kwjfc\" (UID: \"2e062989-8ba6-44a5-8f95-e1958da237ad\") " pod="hostpath-provisioner/csi-hostpathplugin-kwjfc" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.121972 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c696bafb-e286-4dc1-8edd-860c8c0564da-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-s87zt\" (UID: \"c696bafb-e286-4dc1-8edd-860c8c0564da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.122002 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c696bafb-e286-4dc1-8edd-860c8c0564da-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-s87zt\" (UID: \"c696bafb-e286-4dc1-8edd-860c8c0564da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.122205 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-clbcp\" (UniqueName: 
\"kubernetes.io/projected/973d580d-7e62-419e-be96-115733ca98bf-kube-api-access-clbcp\") pod \"marketplace-operator-547dbd544d-5hwt4\" (UID: \"973d580d-7e62-419e-be96-115733ca98bf\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-5hwt4" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.122575 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c696bafb-e286-4dc1-8edd-860c8c0564da-config\") pod \"apiserver-9ddfb9f55-s87zt\" (UID: \"c696bafb-e286-4dc1-8edd-860c8c0564da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.122635 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/498973e3-482d-4a19-9224-c3e67efc2a20-serving-cert\") pod \"apiserver-8596bd845d-fpklc\" (UID: \"498973e3-482d-4a19-9224-c3e67efc2a20\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fpklc" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.122656 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c696bafb-e286-4dc1-8edd-860c8c0564da-image-import-ca\") pod \"apiserver-9ddfb9f55-s87zt\" (UID: \"c696bafb-e286-4dc1-8edd-860c8c0564da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.123089 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e811bf67-7a6d-4279-bbff-b2cf02f66558-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-sdxrl\" (UID: \"e811bf67-7a6d-4279-bbff-b2cf02f66558\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sdxrl" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.123126 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k9c7t\" (UniqueName: \"kubernetes.io/projected/c696bafb-e286-4dc1-8edd-860c8c0564da-kube-api-access-k9c7t\") pod \"apiserver-9ddfb9f55-s87zt\" (UID: \"c696bafb-e286-4dc1-8edd-860c8c0564da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.123146 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/973d580d-7e62-419e-be96-115733ca98bf-tmp\") pod \"marketplace-operator-547dbd544d-5hwt4\" (UID: \"973d580d-7e62-419e-be96-115733ca98bf\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-5hwt4" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.123172 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2e062989-8ba6-44a5-8f95-e1958da237ad-registration-dir\") pod \"csi-hostpathplugin-kwjfc\" (UID: \"2e062989-8ba6-44a5-8f95-e1958da237ad\") " pod="hostpath-provisioner/csi-hostpathplugin-kwjfc" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.123193 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2e062989-8ba6-44a5-8f95-e1958da237ad-plugins-dir\") pod \"csi-hostpathplugin-kwjfc\" (UID: \"2e062989-8ba6-44a5-8f95-e1958da237ad\") " pod="hostpath-provisioner/csi-hostpathplugin-kwjfc" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.123215 5124 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw9sz\" (UniqueName: \"kubernetes.io/projected/9f09670d-b0a1-4fa2-9d30-7b82c260e38d-kube-api-access-nw9sz\") pod \"console-operator-67c89758df-ns6rw\" (UID: \"9f09670d-b0a1-4fa2-9d30-7b82c260e38d\") " pod="openshift-console-operator/console-operator-67c89758df-ns6rw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.123236 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d5wpm\" (UniqueName: \"kubernetes.io/projected/8a006121-cc9c-46f5-98db-14148f556b11-kube-api-access-d5wpm\") pod \"olm-operator-5cdf44d969-5tzb8\" (UID: \"8a006121-cc9c-46f5-98db-14148f556b11\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5tzb8" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.123246 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e93a2f69-37f1-47bc-b659-8684acf34de3-config\") pod \"openshift-apiserver-operator-846cbfc458-jkc7k\" (UID: \"e93a2f69-37f1-47bc-b659-8684acf34de3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jkc7k" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.123298 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6b5e4a3d-13f4-42c6-9adb-30a826411994-tmp-dir\") pod \"etcd-operator-69b85846b6-sv2rt\" (UID: \"6b5e4a3d-13f4-42c6-9adb-30a826411994\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sv2rt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.123410 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.123452 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c696bafb-e286-4dc1-8edd-860c8c0564da-serving-cert\") pod \"apiserver-9ddfb9f55-s87zt\" (UID: \"c696bafb-e286-4dc1-8edd-860c8c0564da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.123480 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfdd3fba-e428-46ea-a831-e53d949c342a-serving-cert\") pod \"service-ca-operator-5b9c976747-6np67\" (UID: \"dfdd3fba-e428-46ea-a831-e53d949c342a\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-6np67" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.123489 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b5e4a3d-13f4-42c6-9adb-30a826411994-config\") pod \"etcd-operator-69b85846b6-sv2rt\" (UID: \"6b5e4a3d-13f4-42c6-9adb-30a826411994\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sv2rt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.123518 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wkh5d\" (UniqueName: \"kubernetes.io/projected/5ce48d95-5f74-4d15-8f19-94cfd81c3dcf-kube-api-access-wkh5d\") pod \"image-registry-66587d64c8-25hx6\" (UID: 
\"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.123547 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/498973e3-482d-4a19-9224-c3e67efc2a20-etcd-serving-ca\") pod \"apiserver-8596bd845d-fpklc\" (UID: \"498973e3-482d-4a19-9224-c3e67efc2a20\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fpklc" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.123575 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c2cd8439-aeb3-4321-9842-11b3cbb37b0b-metrics-certs\") pod \"router-default-68cf44c8b8-9jvql\" (UID: \"c2cd8439-aeb3-4321-9842-11b3cbb37b0b\") " pod="openshift-ingress/router-default-68cf44c8b8-9jvql" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.123629 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c2cd8439-aeb3-4321-9842-11b3cbb37b0b-default-certificate\") pod \"router-default-68cf44c8b8-9jvql\" (UID: \"c2cd8439-aeb3-4321-9842-11b3cbb37b0b\") " pod="openshift-ingress/router-default-68cf44c8b8-9jvql" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.123656 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6b5e4a3d-13f4-42c6-9adb-30a826411994-etcd-ca\") pod \"etcd-operator-69b85846b6-sv2rt\" (UID: \"6b5e4a3d-13f4-42c6-9adb-30a826411994\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sv2rt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.123684 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4ad39e4e-4d41-443b-bfc7-a4ec7113664c-images\") pod \"machine-config-operator-67c9d58cbb-8cj7n\" (UID: \"4ad39e4e-4d41-443b-bfc7-a4ec7113664c\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8cj7n" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.123660 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2e062989-8ba6-44a5-8f95-e1958da237ad-plugins-dir\") pod \"csi-hostpathplugin-kwjfc\" (UID: \"2e062989-8ba6-44a5-8f95-e1958da237ad\") " pod="hostpath-provisioner/csi-hostpathplugin-kwjfc" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.123715 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-trhq6\" (UniqueName: \"kubernetes.io/projected/036651d1-0c52-4454-8385-bf3f84e19378-kube-api-access-trhq6\") pod \"image-pruner-29489760-dm2tt\" (UID: \"036651d1-0c52-4454-8385-bf3f84e19378\") " pod="openshift-image-registry/image-pruner-29489760-dm2tt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.123742 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9496837-38dd-4e08-bf40-9a191112e42a-config\") pod \"machine-api-operator-755bb95488-6629f\" (UID: \"b9496837-38dd-4e08-bf40-9a191112e42a\") " pod="openshift-machine-api/machine-api-operator-755bb95488-6629f" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.123765 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/288efdc1-c138-42d5-9416-5c9d0faaa831-console-config\") pod \"console-64d44f6ddf-b7nfk\" (UID: \"288efdc1-c138-42d5-9416-5c9d0faaa831\") " pod="openshift-console/console-64d44f6ddf-b7nfk" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.123748 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2e062989-8ba6-44a5-8f95-e1958da237ad-registration-dir\") pod \"csi-hostpathplugin-kwjfc\" (UID: \"2e062989-8ba6-44a5-8f95-e1958da237ad\") " pod="hostpath-provisioner/csi-hostpathplugin-kwjfc" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.123891 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e47ad1f1-7281-4a86-bac9-bbaa37dfeab1-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-mbllj\" (UID: \"e47ad1f1-7281-4a86-bac9-bbaa37dfeab1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-mbllj" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.123905 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/fa9082e9-a8a6-433b-97ca-70128b99d6b7-certs\") pod \"machine-config-server-87k2l\" (UID: \"fa9082e9-a8a6-433b-97ca-70128b99d6b7\") " pod="openshift-machine-config-operator/machine-config-server-87k2l" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.123964 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gzpmz\" (UniqueName: \"kubernetes.io/projected/498973e3-482d-4a19-9224-c3e67efc2a20-kube-api-access-gzpmz\") pod \"apiserver-8596bd845d-fpklc\" (UID: \"498973e3-482d-4a19-9224-c3e67efc2a20\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fpklc" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.123996 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/973d580d-7e62-419e-be96-115733ca98bf-tmp\") pod \"marketplace-operator-547dbd544d-5hwt4\" (UID: \"973d580d-7e62-419e-be96-115733ca98bf\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-5hwt4" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.124053 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/fa9082e9-a8a6-433b-97ca-70128b99d6b7-node-bootstrap-token\") pod \"machine-config-server-87k2l\" (UID: \"fa9082e9-a8a6-433b-97ca-70128b99d6b7\") " pod="openshift-machine-config-operator/machine-config-server-87k2l" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.124105 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b3a1a33e-2dab-43f6-8c34-6ac84e05eb03-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-2xm5v\" (UID: \"b3a1a33e-2dab-43f6-8c34-6ac84e05eb03\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2xm5v" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.124136 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5ce48d95-5f74-4d15-8f19-94cfd81c3dcf-installation-pull-secrets\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") 
" pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.124157 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z5q88\" (UniqueName: \"kubernetes.io/projected/2c16907d-1bcd-420c-879d-65a0552e69d3-kube-api-access-z5q88\") pod \"collect-profiles-29489760-ldpxs\" (UID: \"2c16907d-1bcd-420c-879d-65a0552e69d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-ldpxs" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.124176 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3398b97-1658-4344-afde-a15d309846c9-config-volume\") pod \"dns-default-n64rh\" (UID: \"b3398b97-1658-4344-afde-a15d309846c9\") " pod="openshift-dns/dns-default-n64rh" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.124196 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx6cs\" (UniqueName: \"kubernetes.io/projected/b3a1a33e-2dab-43f6-8c34-6ac84e05eb03-kube-api-access-cx6cs\") pod \"control-plane-machine-set-operator-75ffdb6fcd-2xm5v\" (UID: \"b3a1a33e-2dab-43f6-8c34-6ac84e05eb03\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2xm5v" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.124223 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/498973e3-482d-4a19-9224-c3e67efc2a20-trusted-ca-bundle\") pod \"apiserver-8596bd845d-fpklc\" (UID: \"498973e3-482d-4a19-9224-c3e67efc2a20\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fpklc" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.124245 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/288efdc1-c138-42d5-9416-5c9d0faaa831-console-serving-cert\") pod \"console-64d44f6ddf-b7nfk\" (UID: \"288efdc1-c138-42d5-9416-5c9d0faaa831\") " pod="openshift-console/console-64d44f6ddf-b7nfk" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.124268 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc8wm\" (UniqueName: \"kubernetes.io/projected/1b3bd69c-7b97-42bb-9f12-7d690416e91f-kube-api-access-gc8wm\") pod \"machine-config-controller-f9cdd68f7-rb8jj\" (UID: \"1b3bd69c-7b97-42bb-9f12-7d690416e91f\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-rb8jj" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.124393 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2046c412-f2fc-4d3e-97c7-fa57c6683752-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-qdvls\" (UID: \"2046c412-f2fc-4d3e-97c7-fa57c6683752\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-qdvls" Jan 26 00:10:40 crc kubenswrapper[5124]: E0126 00:10:40.124664 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:40.624645349 +0000 UTC m=+118.533564718 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.124671 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/498973e3-482d-4a19-9224-c3e67efc2a20-etcd-serving-ca\") pod \"apiserver-8596bd845d-fpklc\" (UID: \"498973e3-482d-4a19-9224-c3e67efc2a20\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fpklc" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.124755 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4ad39e4e-4d41-443b-bfc7-a4ec7113664c-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-8cj7n\" (UID: \"4ad39e4e-4d41-443b-bfc7-a4ec7113664c\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8cj7n" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.124793 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a69d5905-85d8-49b8-ab54-15fc8f104c31-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-vp4mw\" (UID: \"a69d5905-85d8-49b8-ab54-15fc8f104c31\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vp4mw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.124828 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cf1e5da6-8866-4e4d-bafe-84bc0f76c41f-webhook-certs\") pod \"multus-admission-controller-69db94689b-wpz4s\" (UID: \"cf1e5da6-8866-4e4d-bafe-84bc0f76c41f\") " pod="openshift-multus/multus-admission-controller-69db94689b-wpz4s" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.124886 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c696bafb-e286-4dc1-8edd-860c8c0564da-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-s87zt\" (UID: \"c696bafb-e286-4dc1-8edd-860c8c0564da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.125036 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/23eb49a3-e378-481a-932f-83ec71b22e6d-signing-cabundle\") pod \"service-ca-74545575db-nsc2v\" (UID: \"23eb49a3-e378-481a-932f-83ec71b22e6d\") " pod="openshift-service-ca/service-ca-74545575db-nsc2v" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.125124 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e93a2f69-37f1-47bc-b659-8684acf34de3-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-jkc7k\" (UID: \"e93a2f69-37f1-47bc-b659-8684acf34de3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jkc7k" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.125318 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/498973e3-482d-4a19-9224-c3e67efc2a20-trusted-ca-bundle\") pod \"apiserver-8596bd845d-fpklc\" (UID: \"498973e3-482d-4a19-9224-c3e67efc2a20\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fpklc" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.125349 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6b5e4a3d-13f4-42c6-9adb-30a826411994-etcd-ca\") pod \"etcd-operator-69b85846b6-sv2rt\" (UID: \"6b5e4a3d-13f4-42c6-9adb-30a826411994\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sv2rt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.125560 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9496837-38dd-4e08-bf40-9a191112e42a-config\") pod \"machine-api-operator-755bb95488-6629f\" (UID: \"b9496837-38dd-4e08-bf40-9a191112e42a\") " pod="openshift-machine-api/machine-api-operator-755bb95488-6629f" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.126118 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c696bafb-e286-4dc1-8edd-860c8c0564da-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-s87zt\" (UID: \"c696bafb-e286-4dc1-8edd-860c8c0564da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.126306 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/498973e3-482d-4a19-9224-c3e67efc2a20-etcd-client\") pod \"apiserver-8596bd845d-fpklc\" (UID: \"498973e3-482d-4a19-9224-c3e67efc2a20\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fpklc" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.127008 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2046c412-f2fc-4d3e-97c7-fa57c6683752-serving-cert\") pod \"authentication-operator-7f5c659b84-qdvls\" (UID: \"2046c412-f2fc-4d3e-97c7-fa57c6683752\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-qdvls" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.127356 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/498973e3-482d-4a19-9224-c3e67efc2a20-encryption-config\") pod \"apiserver-8596bd845d-fpklc\" (UID: \"498973e3-482d-4a19-9224-c3e67efc2a20\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fpklc" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.127724 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c696bafb-e286-4dc1-8edd-860c8c0564da-encryption-config\") pod \"apiserver-9ddfb9f55-s87zt\" (UID: \"c696bafb-e286-4dc1-8edd-860c8c0564da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.128832 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b9496837-38dd-4e08-bf40-9a191112e42a-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-6629f\" (UID: \"b9496837-38dd-4e08-bf40-9a191112e42a\") " pod="openshift-machine-api/machine-api-operator-755bb95488-6629f" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.129176 5124 operation_generator.go:615] "MountVolume.SetUp succeeded 
for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5ce48d95-5f74-4d15-8f19-94cfd81c3dcf-installation-pull-secrets\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.129402 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b5e4a3d-13f4-42c6-9adb-30a826411994-serving-cert\") pod \"etcd-operator-69b85846b6-sv2rt\" (UID: \"6b5e4a3d-13f4-42c6-9adb-30a826411994\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sv2rt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.129616 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c696bafb-e286-4dc1-8edd-860c8c0564da-serving-cert\") pod \"apiserver-9ddfb9f55-s87zt\" (UID: \"c696bafb-e286-4dc1-8edd-860c8c0564da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.130040 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acdc983c-4d4e-4a1e-82a3-a137fe39882a-serving-cert\") pod \"route-controller-manager-776cdc94d6-f6l2j\" (UID: \"acdc983c-4d4e-4a1e-82a3-a137fe39882a\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-f6l2j" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.132780 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.153432 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.163193 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2c16907d-1bcd-420c-879d-65a0552e69d3-secret-volume\") pod \"collect-profiles-29489760-ldpxs\" (UID: \"2c16907d-1bcd-420c-879d-65a0552e69d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-ldpxs" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.163376 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a006121-cc9c-46f5-98db-14148f556b11-profile-collector-cert\") pod \"olm-operator-5cdf44d969-5tzb8\" (UID: \"8a006121-cc9c-46f5-98db-14148f556b11\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5tzb8" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.174416 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.184881 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c16907d-1bcd-420c-879d-65a0552e69d3-config-volume\") pod \"collect-profiles-29489760-ldpxs\" (UID: \"2c16907d-1bcd-420c-879d-65a0552e69d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-ldpxs" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.194143 5124 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.213630 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.226615 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:40 crc kubenswrapper[5124]: E0126 00:10:40.226705 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:40.726677664 +0000 UTC m=+118.635597013 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.227258 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d76339a3-5850-4e27-be40-03180dc8e526-tmp-dir\") pod \"dns-operator-799b87ffcd-lvq9k\" (UID: \"d76339a3-5850-4e27-be40-03180dc8e526\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-lvq9k" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.227371 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/a69d5905-85d8-49b8-ab54-15fc8f104c31-ready\") pod \"cni-sysctl-allowlist-ds-vp4mw\" (UID: \"a69d5905-85d8-49b8-ab54-15fc8f104c31\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vp4mw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.227433 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fxlwg\" (UniqueName: \"kubernetes.io/projected/a69d5905-85d8-49b8-ab54-15fc8f104c31-kube-api-access-fxlwg\") pod \"cni-sysctl-allowlist-ds-vp4mw\" (UID: \"a69d5905-85d8-49b8-ab54-15fc8f104c31\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vp4mw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.227483 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b14632cd-c5f4-41b7-be2f-71d6f7f2c264-config\") pod \"openshift-controller-manager-operator-686468bdd5-zfncw\" (UID: \"b14632cd-c5f4-41b7-be2f-71d6f7f2c264\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zfncw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.227576 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1b3bd69c-7b97-42bb-9f12-7d690416e91f-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-rb8jj\" (UID: \"1b3bd69c-7b97-42bb-9f12-7d690416e91f\") " 
pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-rb8jj" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.227720 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d76339a3-5850-4e27-be40-03180dc8e526-tmp-dir\") pod \"dns-operator-799b87ffcd-lvq9k\" (UID: \"d76339a3-5850-4e27-be40-03180dc8e526\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-lvq9k" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.227723 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e811bf67-7a6d-4279-bbff-b2cf02f66558-srv-cert\") pod \"catalog-operator-75ff9f647d-sdxrl\" (UID: \"e811bf67-7a6d-4279-bbff-b2cf02f66558\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sdxrl" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.227819 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rrs9s\" (UniqueName: \"kubernetes.io/projected/b14632cd-c5f4-41b7-be2f-71d6f7f2c264-kube-api-access-rrs9s\") pod \"openshift-controller-manager-operator-686468bdd5-zfncw\" (UID: \"b14632cd-c5f4-41b7-be2f-71d6f7f2c264\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zfncw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.227862 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mppv7\" (UniqueName: \"kubernetes.io/projected/839e8646-b712-4725-8456-806e52a3144c-kube-api-access-mppv7\") pod \"packageserver-7d4fc7d867-zbjgw\" (UID: \"839e8646-b712-4725-8456-806e52a3144c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zbjgw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.227901 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e451454a-5a94-4535-823c-523ea6f6f7de-cert\") pod \"ingress-canary-nc9fk\" (UID: \"e451454a-5a94-4535-823c-523ea6f6f7de\") " pod="openshift-ingress-canary/ingress-canary-nc9fk" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.227929 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4ad39e4e-4d41-443b-bfc7-a4ec7113664c-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-8cj7n\" (UID: \"4ad39e4e-4d41-443b-bfc7-a4ec7113664c\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8cj7n" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.227972 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b3398b97-1658-4344-afde-a15d309846c9-tmp-dir\") pod \"dns-default-n64rh\" (UID: \"b3398b97-1658-4344-afde-a15d309846c9\") " pod="openshift-dns/dns-default-n64rh" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.228314 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/a69d5905-85d8-49b8-ab54-15fc8f104c31-ready\") pod \"cni-sysctl-allowlist-ds-vp4mw\" (UID: \"a69d5905-85d8-49b8-ab54-15fc8f104c31\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vp4mw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.228337 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b4qpp\" (UniqueName: 
\"kubernetes.io/projected/80cd99f0-6ac5-4187-9bdd-79dde0e74a57-kube-api-access-b4qpp\") pod \"cluster-samples-operator-6b564684c8-9qgdz\" (UID: \"80cd99f0-6ac5-4187-9bdd-79dde0e74a57\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-9qgdz" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.228406 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z6vhx\" (UniqueName: \"kubernetes.io/projected/27a594f4-28ad-49d0-8ab7-f0c0ff14d65c-kube-api-access-z6vhx\") pod \"migrator-866fcbc849-fqxww\" (UID: \"27a594f4-28ad-49d0-8ab7-f0c0ff14d65c\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-fqxww" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.228615 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d7bpv\" (UniqueName: \"kubernetes.io/projected/288efdc1-c138-42d5-9416-5c9d0faaa831-kube-api-access-d7bpv\") pod \"console-64d44f6ddf-b7nfk\" (UID: \"288efdc1-c138-42d5-9416-5c9d0faaa831\") " pod="openshift-console/console-64d44f6ddf-b7nfk" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.228713 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a69d5905-85d8-49b8-ab54-15fc8f104c31-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-vp4mw\" (UID: \"a69d5905-85d8-49b8-ab54-15fc8f104c31\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vp4mw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.228756 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/80cd99f0-6ac5-4187-9bdd-79dde0e74a57-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-9qgdz\" (UID: \"80cd99f0-6ac5-4187-9bdd-79dde0e74a57\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-9qgdz" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.228780 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1185cd69-7c6a-46f0-acf1-64d587996124-machine-approver-tls\") pod \"machine-approver-54c688565-t5442\" (UID: \"1185cd69-7c6a-46f0-acf1-64d587996124\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-t5442" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.228991 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b3398b97-1658-4344-afde-a15d309846c9-tmp-dir\") pod \"dns-default-n64rh\" (UID: \"b3398b97-1658-4344-afde-a15d309846c9\") " pod="openshift-dns/dns-default-n64rh" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.229085 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1185cd69-7c6a-46f0-acf1-64d587996124-config\") pod \"machine-approver-54c688565-t5442\" (UID: \"1185cd69-7c6a-46f0-acf1-64d587996124\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-t5442" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.229415 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jqtl7\" (UniqueName: \"kubernetes.io/projected/cf1e5da6-8866-4e4d-bafe-84bc0f76c41f-kube-api-access-jqtl7\") pod \"multus-admission-controller-69db94689b-wpz4s\" (UID: \"cf1e5da6-8866-4e4d-bafe-84bc0f76c41f\") " 
pod="openshift-multus/multus-admission-controller-69db94689b-wpz4s" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.229459 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b14632cd-c5f4-41b7-be2f-71d6f7f2c264-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-zfncw\" (UID: \"b14632cd-c5f4-41b7-be2f-71d6f7f2c264\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zfncw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.229491 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b3398b97-1658-4344-afde-a15d309846c9-metrics-tls\") pod \"dns-default-n64rh\" (UID: \"b3398b97-1658-4344-afde-a15d309846c9\") " pod="openshift-dns/dns-default-n64rh" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.229530 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2l6sb\" (UniqueName: \"kubernetes.io/projected/e811bf67-7a6d-4279-bbff-b2cf02f66558-kube-api-access-2l6sb\") pod \"catalog-operator-75ff9f647d-sdxrl\" (UID: \"e811bf67-7a6d-4279-bbff-b2cf02f66558\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sdxrl" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.229551 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1185cd69-7c6a-46f0-acf1-64d587996124-auth-proxy-config\") pod \"machine-approver-54c688565-t5442\" (UID: \"1185cd69-7c6a-46f0-acf1-64d587996124\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-t5442" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.229577 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lft8d\" (UniqueName: \"kubernetes.io/projected/4ad39e4e-4d41-443b-bfc7-a4ec7113664c-kube-api-access-lft8d\") pod \"machine-config-operator-67c9d58cbb-8cj7n\" (UID: \"4ad39e4e-4d41-443b-bfc7-a4ec7113664c\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8cj7n" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.229616 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kf6g2\" (UniqueName: \"kubernetes.io/projected/e451454a-5a94-4535-823c-523ea6f6f7de-kube-api-access-kf6g2\") pod \"ingress-canary-nc9fk\" (UID: \"e451454a-5a94-4535-823c-523ea6f6f7de\") " pod="openshift-ingress-canary/ingress-canary-nc9fk" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.229650 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/839e8646-b712-4725-8456-806e52a3144c-tmpfs\") pod \"packageserver-7d4fc7d867-zbjgw\" (UID: \"839e8646-b712-4725-8456-806e52a3144c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zbjgw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.229698 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/288efdc1-c138-42d5-9416-5c9d0faaa831-trusted-ca-bundle\") pod \"console-64d44f6ddf-b7nfk\" (UID: \"288efdc1-c138-42d5-9416-5c9d0faaa831\") " pod="openshift-console/console-64d44f6ddf-b7nfk" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.229715 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d76339a3-5850-4e27-be40-03180dc8e526-metrics-tls\") pod \"dns-operator-799b87ffcd-lvq9k\" (UID: \"d76339a3-5850-4e27-be40-03180dc8e526\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-lvq9k" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.229739 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f09670d-b0a1-4fa2-9d30-7b82c260e38d-config\") pod \"console-operator-67c89758df-ns6rw\" (UID: \"9f09670d-b0a1-4fa2-9d30-7b82c260e38d\") " pod="openshift-console-operator/console-operator-67c89758df-ns6rw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.229754 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f09670d-b0a1-4fa2-9d30-7b82c260e38d-serving-cert\") pod \"console-operator-67c89758df-ns6rw\" (UID: \"9f09670d-b0a1-4fa2-9d30-7b82c260e38d\") " pod="openshift-console-operator/console-operator-67c89758df-ns6rw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.229774 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f09670d-b0a1-4fa2-9d30-7b82c260e38d-trusted-ca\") pod \"console-operator-67c89758df-ns6rw\" (UID: \"9f09670d-b0a1-4fa2-9d30-7b82c260e38d\") " pod="openshift-console-operator/console-operator-67c89758df-ns6rw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.229795 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/288efdc1-c138-42d5-9416-5c9d0faaa831-oauth-serving-cert\") pod \"console-64d44f6ddf-b7nfk\" (UID: \"288efdc1-c138-42d5-9416-5c9d0faaa831\") " pod="openshift-console/console-64d44f6ddf-b7nfk" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.229822 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1b3bd69c-7b97-42bb-9f12-7d690416e91f-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-rb8jj\" (UID: \"1b3bd69c-7b97-42bb-9f12-7d690416e91f\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-rb8jj" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.229847 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/839e8646-b712-4725-8456-806e52a3144c-apiservice-cert\") pod \"packageserver-7d4fc7d867-zbjgw\" (UID: \"839e8646-b712-4725-8456-806e52a3144c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zbjgw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.229879 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/839e8646-b712-4725-8456-806e52a3144c-webhook-cert\") pod \"packageserver-7d4fc7d867-zbjgw\" (UID: \"839e8646-b712-4725-8456-806e52a3144c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zbjgw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.229910 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b14632cd-c5f4-41b7-be2f-71d6f7f2c264-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-zfncw\" (UID: \"b14632cd-c5f4-41b7-be2f-71d6f7f2c264\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zfncw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.229953 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lpcz4\" (UniqueName: \"kubernetes.io/projected/b3398b97-1658-4344-afde-a15d309846c9-kube-api-access-lpcz4\") pod \"dns-default-n64rh\" (UID: \"b3398b97-1658-4344-afde-a15d309846c9\") " pod="openshift-dns/dns-default-n64rh" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.229985 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e811bf67-7a6d-4279-bbff-b2cf02f66558-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-sdxrl\" (UID: \"e811bf67-7a6d-4279-bbff-b2cf02f66558\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sdxrl" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.230014 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nw9sz\" (UniqueName: \"kubernetes.io/projected/9f09670d-b0a1-4fa2-9d30-7b82c260e38d-kube-api-access-nw9sz\") pod \"console-operator-67c89758df-ns6rw\" (UID: \"9f09670d-b0a1-4fa2-9d30-7b82c260e38d\") " pod="openshift-console-operator/console-operator-67c89758df-ns6rw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.230043 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.230071 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4ad39e4e-4d41-443b-bfc7-a4ec7113664c-images\") pod \"machine-config-operator-67c9d58cbb-8cj7n\" (UID: \"4ad39e4e-4d41-443b-bfc7-a4ec7113664c\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8cj7n" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.230088 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/288efdc1-c138-42d5-9416-5c9d0faaa831-console-config\") pod \"console-64d44f6ddf-b7nfk\" (UID: \"288efdc1-c138-42d5-9416-5c9d0faaa831\") " pod="openshift-console/console-64d44f6ddf-b7nfk" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.230107 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/fa9082e9-a8a6-433b-97ca-70128b99d6b7-certs\") pod \"machine-config-server-87k2l\" (UID: \"fa9082e9-a8a6-433b-97ca-70128b99d6b7\") " pod="openshift-machine-config-operator/machine-config-server-87k2l" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.230128 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/fa9082e9-a8a6-433b-97ca-70128b99d6b7-node-bootstrap-token\") pod \"machine-config-server-87k2l\" (UID: \"fa9082e9-a8a6-433b-97ca-70128b99d6b7\") " pod="openshift-machine-config-operator/machine-config-server-87k2l" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.230146 5124 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b3a1a33e-2dab-43f6-8c34-6ac84e05eb03-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-2xm5v\" (UID: \"b3a1a33e-2dab-43f6-8c34-6ac84e05eb03\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2xm5v" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.230151 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1185cd69-7c6a-46f0-acf1-64d587996124-config\") pod \"machine-approver-54c688565-t5442\" (UID: \"1185cd69-7c6a-46f0-acf1-64d587996124\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-t5442" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.230171 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3398b97-1658-4344-afde-a15d309846c9-config-volume\") pod \"dns-default-n64rh\" (UID: \"b3398b97-1658-4344-afde-a15d309846c9\") " pod="openshift-dns/dns-default-n64rh" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.230189 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cx6cs\" (UniqueName: \"kubernetes.io/projected/b3a1a33e-2dab-43f6-8c34-6ac84e05eb03-kube-api-access-cx6cs\") pod \"control-plane-machine-set-operator-75ffdb6fcd-2xm5v\" (UID: \"b3a1a33e-2dab-43f6-8c34-6ac84e05eb03\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2xm5v" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.230281 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/288efdc1-c138-42d5-9416-5c9d0faaa831-console-serving-cert\") pod \"console-64d44f6ddf-b7nfk\" (UID: \"288efdc1-c138-42d5-9416-5c9d0faaa831\") " pod="openshift-console/console-64d44f6ddf-b7nfk" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.230343 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gc8wm\" (UniqueName: \"kubernetes.io/projected/1b3bd69c-7b97-42bb-9f12-7d690416e91f-kube-api-access-gc8wm\") pod \"machine-config-controller-f9cdd68f7-rb8jj\" (UID: \"1b3bd69c-7b97-42bb-9f12-7d690416e91f\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-rb8jj" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.230396 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4ad39e4e-4d41-443b-bfc7-a4ec7113664c-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-8cj7n\" (UID: \"4ad39e4e-4d41-443b-bfc7-a4ec7113664c\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8cj7n" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.230438 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a69d5905-85d8-49b8-ab54-15fc8f104c31-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-vp4mw\" (UID: \"a69d5905-85d8-49b8-ab54-15fc8f104c31\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vp4mw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.230475 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cf1e5da6-8866-4e4d-bafe-84bc0f76c41f-webhook-certs\") pod 
\"multus-admission-controller-69db94689b-wpz4s\" (UID: \"cf1e5da6-8866-4e4d-bafe-84bc0f76c41f\") " pod="openshift-multus/multus-admission-controller-69db94689b-wpz4s" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.230537 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gtvh7\" (UniqueName: \"kubernetes.io/projected/fa9082e9-a8a6-433b-97ca-70128b99d6b7-kube-api-access-gtvh7\") pod \"machine-config-server-87k2l\" (UID: \"fa9082e9-a8a6-433b-97ca-70128b99d6b7\") " pod="openshift-machine-config-operator/machine-config-server-87k2l" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.230577 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e811bf67-7a6d-4279-bbff-b2cf02f66558-tmpfs\") pod \"catalog-operator-75ff9f647d-sdxrl\" (UID: \"e811bf67-7a6d-4279-bbff-b2cf02f66558\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sdxrl" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.230647 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/288efdc1-c138-42d5-9416-5c9d0faaa831-service-ca\") pod \"console-64d44f6ddf-b7nfk\" (UID: \"288efdc1-c138-42d5-9416-5c9d0faaa831\") " pod="openshift-console/console-64d44f6ddf-b7nfk" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.230707 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g7n7n\" (UniqueName: \"kubernetes.io/projected/d76339a3-5850-4e27-be40-03180dc8e526-kube-api-access-g7n7n\") pod \"dns-operator-799b87ffcd-lvq9k\" (UID: \"d76339a3-5850-4e27-be40-03180dc8e526\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-lvq9k" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.230744 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qnrkm\" (UniqueName: \"kubernetes.io/projected/1185cd69-7c6a-46f0-acf1-64d587996124-kube-api-access-qnrkm\") pod \"machine-approver-54c688565-t5442\" (UID: \"1185cd69-7c6a-46f0-acf1-64d587996124\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-t5442" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.230802 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/288efdc1-c138-42d5-9416-5c9d0faaa831-console-oauth-config\") pod \"console-64d44f6ddf-b7nfk\" (UID: \"288efdc1-c138-42d5-9416-5c9d0faaa831\") " pod="openshift-console/console-64d44f6ddf-b7nfk" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.230869 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/288efdc1-c138-42d5-9416-5c9d0faaa831-oauth-serving-cert\") pod \"console-64d44f6ddf-b7nfk\" (UID: \"288efdc1-c138-42d5-9416-5c9d0faaa831\") " pod="openshift-console/console-64d44f6ddf-b7nfk" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.230945 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/839e8646-b712-4725-8456-806e52a3144c-tmpfs\") pod \"packageserver-7d4fc7d867-zbjgw\" (UID: \"839e8646-b712-4725-8456-806e52a3144c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zbjgw" Jan 26 00:10:40 crc kubenswrapper[5124]: E0126 00:10:40.231162 5124 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:40.731150053 +0000 UTC m=+118.640069392 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.231470 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1185cd69-7c6a-46f0-acf1-64d587996124-auth-proxy-config\") pod \"machine-approver-54c688565-t5442\" (UID: \"1185cd69-7c6a-46f0-acf1-64d587996124\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-t5442" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.232424 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1b3bd69c-7b97-42bb-9f12-7d690416e91f-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-rb8jj\" (UID: \"1b3bd69c-7b97-42bb-9f12-7d690416e91f\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-rb8jj" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.232564 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b14632cd-c5f4-41b7-be2f-71d6f7f2c264-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-zfncw\" (UID: \"b14632cd-c5f4-41b7-be2f-71d6f7f2c264\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zfncw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.233616 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.233874 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1185cd69-7c6a-46f0-acf1-64d587996124-machine-approver-tls\") pod \"machine-approver-54c688565-t5442\" (UID: \"1185cd69-7c6a-46f0-acf1-64d587996124\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-t5442" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.234419 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/288efdc1-c138-42d5-9416-5c9d0faaa831-trusted-ca-bundle\") pod \"console-64d44f6ddf-b7nfk\" (UID: \"288efdc1-c138-42d5-9416-5c9d0faaa831\") " pod="openshift-console/console-64d44f6ddf-b7nfk" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.234842 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/288efdc1-c138-42d5-9416-5c9d0faaa831-console-config\") pod \"console-64d44f6ddf-b7nfk\" (UID: \"288efdc1-c138-42d5-9416-5c9d0faaa831\") " pod="openshift-console/console-64d44f6ddf-b7nfk" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.235527 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1b3bd69c-7b97-42bb-9f12-7d690416e91f-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-rb8jj\" (UID: \"1b3bd69c-7b97-42bb-9f12-7d690416e91f\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-rb8jj" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.235647 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e811bf67-7a6d-4279-bbff-b2cf02f66558-tmpfs\") pod \"catalog-operator-75ff9f647d-sdxrl\" (UID: \"e811bf67-7a6d-4279-bbff-b2cf02f66558\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sdxrl" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.235976 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a69d5905-85d8-49b8-ab54-15fc8f104c31-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-vp4mw\" (UID: \"a69d5905-85d8-49b8-ab54-15fc8f104c31\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vp4mw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.236459 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4ad39e4e-4d41-443b-bfc7-a4ec7113664c-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-8cj7n\" (UID: \"4ad39e4e-4d41-443b-bfc7-a4ec7113664c\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8cj7n" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.237013 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/288efdc1-c138-42d5-9416-5c9d0faaa831-service-ca\") pod \"console-64d44f6ddf-b7nfk\" (UID: \"288efdc1-c138-42d5-9416-5c9d0faaa831\") " pod="openshift-console/console-64d44f6ddf-b7nfk" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.237197 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/288efdc1-c138-42d5-9416-5c9d0faaa831-console-oauth-config\") pod \"console-64d44f6ddf-b7nfk\" (UID: \"288efdc1-c138-42d5-9416-5c9d0faaa831\") " pod="openshift-console/console-64d44f6ddf-b7nfk" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.237320 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/288efdc1-c138-42d5-9416-5c9d0faaa831-console-serving-cert\") pod \"console-64d44f6ddf-b7nfk\" (UID: \"288efdc1-c138-42d5-9416-5c9d0faaa831\") " pod="openshift-console/console-64d44f6ddf-b7nfk" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.237913 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d76339a3-5850-4e27-be40-03180dc8e526-metrics-tls\") pod \"dns-operator-799b87ffcd-lvq9k\" (UID: \"d76339a3-5850-4e27-be40-03180dc8e526\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-lvq9k" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.238956 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e811bf67-7a6d-4279-bbff-b2cf02f66558-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-sdxrl\" (UID: \"e811bf67-7a6d-4279-bbff-b2cf02f66558\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sdxrl" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.273094 5124 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.293655 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.305288 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b14632cd-c5f4-41b7-be2f-71d6f7f2c264-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-zfncw\" (UID: \"b14632cd-c5f4-41b7-be2f-71d6f7f2c264\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zfncw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.313265 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.318301 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b14632cd-c5f4-41b7-be2f-71d6f7f2c264-config\") pod \"openshift-controller-manager-operator-686468bdd5-zfncw\" (UID: \"b14632cd-c5f4-41b7-be2f-71d6f7f2c264\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zfncw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.332695 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:40 crc kubenswrapper[5124]: E0126 00:10:40.332835 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:40.832813558 +0000 UTC m=+118.741732927 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.332948 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.333050 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 26 00:10:40 crc kubenswrapper[5124]: E0126 00:10:40.333430 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:40.833412633 +0000 UTC m=+118.742332002 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.353411 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.364695 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.364728 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.364821 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.373251 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.393027 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.404160 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f09670d-b0a1-4fa2-9d30-7b82c260e38d-serving-cert\") pod \"console-operator-67c89758df-ns6rw\" (UID: \"9f09670d-b0a1-4fa2-9d30-7b82c260e38d\") " pod="openshift-console-operator/console-operator-67c89758df-ns6rw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.414068 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.435119 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:40 crc kubenswrapper[5124]: E0126 00:10:40.435283 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:40.935259524 +0000 UTC m=+118.844178873 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.435763 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:40 crc kubenswrapper[5124]: E0126 00:10:40.436747 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:40.936721633 +0000 UTC m=+118.845641012 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.440934 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.443990 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f09670d-b0a1-4fa2-9d30-7b82c260e38d-trusted-ca\") pod \"console-operator-67c89758df-ns6rw\" (UID: \"9f09670d-b0a1-4fa2-9d30-7b82c260e38d\") " pod="openshift-console-operator/console-operator-67c89758df-ns6rw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.454563 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.463299 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f09670d-b0a1-4fa2-9d30-7b82c260e38d-config\") pod \"console-operator-67c89758df-ns6rw\" (UID: \"9f09670d-b0a1-4fa2-9d30-7b82c260e38d\") " pod="openshift-console-operator/console-operator-67c89758df-ns6rw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.473734 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.493274 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.505800 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/839e8646-b712-4725-8456-806e52a3144c-webhook-cert\") pod \"packageserver-7d4fc7d867-zbjgw\" (UID: \"839e8646-b712-4725-8456-806e52a3144c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zbjgw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.507683 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/839e8646-b712-4725-8456-806e52a3144c-apiservice-cert\") pod \"packageserver-7d4fc7d867-zbjgw\" (UID: \"839e8646-b712-4725-8456-806e52a3144c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zbjgw" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.513508 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.533827 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.536783 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:40 crc kubenswrapper[5124]: E0126 00:10:40.536861 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:41.036840908 +0000 UTC m=+118.945760257 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.537513 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:40 crc kubenswrapper[5124]: E0126 00:10:40.537832 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:41.037824523 +0000 UTC m=+118.946743872 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.553539 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.572867 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.579415 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfdd3fba-e428-46ea-a831-e53d949c342a-config\") pod \"service-ca-operator-5b9c976747-6np67\" (UID: \"dfdd3fba-e428-46ea-a831-e53d949c342a\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-6np67" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.594413 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.608069 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfdd3fba-e428-46ea-a831-e53d949c342a-serving-cert\") pod \"service-ca-operator-5b9c976747-6np67\" (UID: \"dfdd3fba-e428-46ea-a831-e53d949c342a\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-6np67" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.614089 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.632639 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.638604 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:40 crc kubenswrapper[5124]: E0126 00:10:40.638810 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:41.13878469 +0000 UTC m=+119.047704039 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.639439 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:40 crc kubenswrapper[5124]: E0126 00:10:40.639825 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:41.139811858 +0000 UTC m=+119.048731227 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.653212 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.666633 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5205d539-f164-46b4-858c-9ca958a1102a-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-xzm9l\" (UID: \"5205d539-f164-46b4-858c-9ca958a1102a\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-xzm9l" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.683573 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.690240 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5205d539-f164-46b4-858c-9ca958a1102a-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-xzm9l\" (UID: \"5205d539-f164-46b4-858c-9ca958a1102a\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-xzm9l" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.694294 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.713635 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.732840 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 
00:10:40.740047 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/23eb49a3-e378-481a-932f-83ec71b22e6d-signing-key\") pod \"service-ca-74545575db-nsc2v\" (UID: \"23eb49a3-e378-481a-932f-83ec71b22e6d\") " pod="openshift-service-ca/service-ca-74545575db-nsc2v" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.741334 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:40 crc kubenswrapper[5124]: E0126 00:10:40.741451 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:41.241421452 +0000 UTC m=+119.150340811 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.741950 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:40 crc kubenswrapper[5124]: E0126 00:10:40.742371 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:41.242346496 +0000 UTC m=+119.151265875 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.754476 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.757099 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/23eb49a3-e378-481a-932f-83ec71b22e6d-signing-cabundle\") pod \"service-ca-74545575db-nsc2v\" (UID: \"23eb49a3-e378-481a-932f-83ec71b22e6d\") " pod="openshift-service-ca/service-ca-74545575db-nsc2v" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.773399 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.793021 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.813047 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.833404 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.843287 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:40 crc kubenswrapper[5124]: E0126 00:10:40.843923 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:41.343839998 +0000 UTC m=+119.252759377 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.844573 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:40 crc kubenswrapper[5124]: E0126 00:10:40.845166 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:41.345136412 +0000 UTC m=+119.254055851 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.846773 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/460f5edc-0e33-44ee-b8ad-41e51e22924a-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-kpn7g\" (UID: \"460f5edc-0e33-44ee-b8ad-41e51e22924a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kpn7g" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.853852 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.872653 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.874050 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fe9fae5-6a94-45aa-9fe5-086c9dddb3c1-config\") pod \"kube-apiserver-operator-575994946d-csld6\" (UID: \"4fe9fae5-6a94-45aa-9fe5-086c9dddb3c1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-csld6" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.894113 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.901465 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fe9fae5-6a94-45aa-9fe5-086c9dddb3c1-serving-cert\") pod \"kube-apiserver-operator-575994946d-csld6\" (UID: 
\"4fe9fae5-6a94-45aa-9fe5-086c9dddb3c1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-csld6" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.914962 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.933912 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.947325 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:40 crc kubenswrapper[5124]: E0126 00:10:40.947563 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:41.447515947 +0000 UTC m=+119.356435306 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.948161 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:40 crc kubenswrapper[5124]: E0126 00:10:40.948816 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:41.448806051 +0000 UTC m=+119.357725400 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.952298 5124 request.go:752] "Waited before sending request" delay="1.003743143s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-dockercfg-kw8fx&limit=500&resourceVersion=0" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.954319 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.973656 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.978944 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c2cd8439-aeb3-4321-9842-11b3cbb37b0b-default-certificate\") pod \"router-default-68cf44c8b8-9jvql\" (UID: \"c2cd8439-aeb3-4321-9842-11b3cbb37b0b\") " pod="openshift-ingress/router-default-68cf44c8b8-9jvql" Jan 26 00:10:40 crc kubenswrapper[5124]: I0126 00:10:40.993684 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.004381 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c2cd8439-aeb3-4321-9842-11b3cbb37b0b-stats-auth\") pod \"router-default-68cf44c8b8-9jvql\" (UID: \"c2cd8439-aeb3-4321-9842-11b3cbb37b0b\") " pod="openshift-ingress/router-default-68cf44c8b8-9jvql" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.013961 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.018413 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c2cd8439-aeb3-4321-9842-11b3cbb37b0b-metrics-certs\") pod \"router-default-68cf44c8b8-9jvql\" (UID: \"c2cd8439-aeb3-4321-9842-11b3cbb37b0b\") " pod="openshift-ingress/router-default-68cf44c8b8-9jvql" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.034188 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.040830 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2cd8439-aeb3-4321-9842-11b3cbb37b0b-service-ca-bundle\") pod \"router-default-68cf44c8b8-9jvql\" (UID: \"c2cd8439-aeb3-4321-9842-11b3cbb37b0b\") " pod="openshift-ingress/router-default-68cf44c8b8-9jvql" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.050083 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.050278 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:41.55024697 +0000 UTC m=+119.459166319 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.051062 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.051528 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:41.551519884 +0000 UTC m=+119.460439233 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.053873 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.074270 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.082125 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/80cd99f0-6ac5-4187-9bdd-79dde0e74a57-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-9qgdz\" (UID: \"80cd99f0-6ac5-4187-9bdd-79dde0e74a57\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-9qgdz" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.093544 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.113511 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.115956 5124 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.116018 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e47ad1f1-7281-4a86-bac9-bbaa37dfeab1-config podName:e47ad1f1-7281-4a86-bac9-bbaa37dfeab1 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:41.616000714 +0000 UTC m=+119.524920063 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e47ad1f1-7281-4a86-bac9-bbaa37dfeab1-config") pod "openshift-kube-scheduler-operator-54f497555d-mbllj" (UID: "e47ad1f1-7281-4a86-bac9-bbaa37dfeab1") : failed to sync configmap cache: timed out waiting for the condition Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.118343 5124 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: failed to sync configmap cache: timed out waiting for the condition Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.118408 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ec000458-4225-4aa1-b22e-244d7d137c9e-config podName:ec000458-4225-4aa1-b22e-244d7d137c9e nodeName:}" failed. No retries permitted until 2026-01-26 00:10:41.618391177 +0000 UTC m=+119.527310546 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ec000458-4225-4aa1-b22e-244d7d137c9e-config") pod "kube-storage-version-migrator-operator-565b79b866-vfn25" (UID: "ec000458-4225-4aa1-b22e-244d7d137c9e") : failed to sync configmap cache: timed out waiting for the condition Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.118408 5124 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.118489 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f8124ef-e842-4eaa-a6bb-54b67540b2ac-config podName:8f8124ef-e842-4eaa-a6bb-54b67540b2ac nodeName:}" failed. No retries permitted until 2026-01-26 00:10:41.61847835 +0000 UTC m=+119.527397699 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/8f8124ef-e842-4eaa-a6bb-54b67540b2ac-config") pod "kube-controller-manager-operator-69d5f845f8-zbgtx" (UID: "8f8124ef-e842-4eaa-a6bb-54b67540b2ac") : failed to sync configmap cache: timed out waiting for the condition Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.118695 5124 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.118890 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/973d580d-7e62-419e-be96-115733ca98bf-marketplace-trusted-ca podName:973d580d-7e62-419e-be96-115733ca98bf nodeName:}" failed. No retries permitted until 2026-01-26 00:10:41.61886291 +0000 UTC m=+119.527782269 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/973d580d-7e62-419e-be96-115733ca98bf-marketplace-trusted-ca") pod "marketplace-operator-547dbd544d-5hwt4" (UID: "973d580d-7e62-419e-be96-115733ca98bf") : failed to sync configmap cache: timed out waiting for the condition Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.120467 5124 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.120566 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/973d580d-7e62-419e-be96-115733ca98bf-marketplace-operator-metrics podName:973d580d-7e62-419e-be96-115733ca98bf nodeName:}" failed. No retries permitted until 2026-01-26 00:10:41.620552744 +0000 UTC m=+119.529472103 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/973d580d-7e62-419e-be96-115733ca98bf-marketplace-operator-metrics") pod "marketplace-operator-547dbd544d-5hwt4" (UID: "973d580d-7e62-419e-be96-115733ca98bf") : failed to sync secret cache: timed out waiting for the condition Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.120659 5124 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.120760 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec000458-4225-4aa1-b22e-244d7d137c9e-serving-cert podName:ec000458-4225-4aa1-b22e-244d7d137c9e nodeName:}" failed. No retries permitted until 2026-01-26 00:10:41.620736179 +0000 UTC m=+119.529655528 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ec000458-4225-4aa1-b22e-244d7d137c9e-serving-cert") pod "kube-storage-version-migrator-operator-565b79b866-vfn25" (UID: "ec000458-4225-4aa1-b22e-244d7d137c9e") : failed to sync secret cache: timed out waiting for the condition Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.120762 5124 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.120808 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a006121-cc9c-46f5-98db-14148f556b11-srv-cert podName:8a006121-cc9c-46f5-98db-14148f556b11 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:41.62079583 +0000 UTC m=+119.529715199 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a006121-cc9c-46f5-98db-14148f556b11-srv-cert") pod "olm-operator-5cdf44d969-5tzb8" (UID: "8a006121-cc9c-46f5-98db-14148f556b11") : failed to sync secret cache: timed out waiting for the condition Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.121019 5124 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.121209 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8f8124ef-e842-4eaa-a6bb-54b67540b2ac-serving-cert podName:8f8124ef-e842-4eaa-a6bb-54b67540b2ac nodeName:}" failed. No retries permitted until 2026-01-26 00:10:41.621188782 +0000 UTC m=+119.530108141 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8f8124ef-e842-4eaa-a6bb-54b67540b2ac-serving-cert") pod "kube-controller-manager-operator-69d5f845f8-zbgtx" (UID: "8f8124ef-e842-4eaa-a6bb-54b67540b2ac") : failed to sync secret cache: timed out waiting for the condition Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.121733 5124 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.121794 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e47ad1f1-7281-4a86-bac9-bbaa37dfeab1-serving-cert podName:e47ad1f1-7281-4a86-bac9-bbaa37dfeab1 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:41.621785617 +0000 UTC m=+119.530704966 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e47ad1f1-7281-4a86-bac9-bbaa37dfeab1-serving-cert") pod "openshift-kube-scheduler-operator-54f497555d-mbllj" (UID: "e47ad1f1-7281-4a86-bac9-bbaa37dfeab1") : failed to sync secret cache: timed out waiting for the condition Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.134120 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.152562 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.153105 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:41.652958804 +0000 UTC m=+119.561878173 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.153521 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.153607 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.154116 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:41.654095423 +0000 UTC m=+119.563014782 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.173816 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.182221 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4ad39e4e-4d41-443b-bfc7-a4ec7113664c-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-8cj7n\" (UID: \"4ad39e4e-4d41-443b-bfc7-a4ec7113664c\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8cj7n" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.192997 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.203026 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4ad39e4e-4d41-443b-bfc7-a4ec7113664c-images\") pod \"machine-config-operator-67c9d58cbb-8cj7n\" (UID: \"4ad39e4e-4d41-443b-bfc7-a4ec7113664c\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8cj7n" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.213454 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.228874 5124 secret.go:189] Couldn't get secret 
openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.228912 5124 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.228976 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e451454a-5a94-4535-823c-523ea6f6f7de-cert podName:e451454a-5a94-4535-823c-523ea6f6f7de nodeName:}" failed. No retries permitted until 2026-01-26 00:10:41.728956519 +0000 UTC m=+119.637875868 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e451454a-5a94-4535-823c-523ea6f6f7de-cert") pod "ingress-canary-nc9fk" (UID: "e451454a-5a94-4535-823c-523ea6f6f7de") : failed to sync secret cache: timed out waiting for the condition Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.228994 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e811bf67-7a6d-4279-bbff-b2cf02f66558-srv-cert podName:e811bf67-7a6d-4279-bbff-b2cf02f66558 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:41.728987789 +0000 UTC m=+119.637907138 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e811bf67-7a6d-4279-bbff-b2cf02f66558-srv-cert") pod "catalog-operator-75ff9f647d-sdxrl" (UID: "e811bf67-7a6d-4279-bbff-b2cf02f66558") : failed to sync secret cache: timed out waiting for the condition Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.229147 5124 configmap.go:193] Couldn't get configMap openshift-multus/cni-sysctl-allowlist: failed to sync configmap cache: timed out waiting for the condition Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.229262 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a69d5905-85d8-49b8-ab54-15fc8f104c31-cni-sysctl-allowlist podName:a69d5905-85d8-49b8-ab54-15fc8f104c31 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:41.729243436 +0000 UTC m=+119.638162785 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cni-sysctl-allowlist" (UniqueName: "kubernetes.io/configmap/a69d5905-85d8-49b8-ab54-15fc8f104c31-cni-sysctl-allowlist") pod "cni-sysctl-allowlist-ds-vp4mw" (UID: "a69d5905-85d8-49b8-ab54-15fc8f104c31") : failed to sync configmap cache: timed out waiting for the condition Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.231124 5124 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.231220 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3398b97-1658-4344-afde-a15d309846c9-metrics-tls podName:b3398b97-1658-4344-afde-a15d309846c9 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:41.731200278 +0000 UTC m=+119.640119627 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/b3398b97-1658-4344-afde-a15d309846c9-metrics-tls") pod "dns-default-n64rh" (UID: "b3398b97-1658-4344-afde-a15d309846c9") : failed to sync secret cache: timed out waiting for the condition Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.233435 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.235402 5124 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.235472 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3a1a33e-2dab-43f6-8c34-6ac84e05eb03-control-plane-machine-set-operator-tls podName:b3a1a33e-2dab-43f6-8c34-6ac84e05eb03 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:41.73545095 +0000 UTC m=+119.644370299 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/b3a1a33e-2dab-43f6-8c34-6ac84e05eb03-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-75ffdb6fcd-2xm5v" (UID: "b3a1a33e-2dab-43f6-8c34-6ac84e05eb03") : failed to sync secret cache: timed out waiting for the condition Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.236876 5124 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.236940 5124 secret.go:189] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.236957 5124 secret.go:189] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.237008 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa9082e9-a8a6-433b-97ca-70128b99d6b7-node-bootstrap-token podName:fa9082e9-a8a6-433b-97ca-70128b99d6b7 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:41.736995932 +0000 UTC m=+119.645915281 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/fa9082e9-a8a6-433b-97ca-70128b99d6b7-node-bootstrap-token") pod "machine-config-server-87k2l" (UID: "fa9082e9-a8a6-433b-97ca-70128b99d6b7") : failed to sync secret cache: timed out waiting for the condition Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.237025 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa9082e9-a8a6-433b-97ca-70128b99d6b7-certs podName:fa9082e9-a8a6-433b-97ca-70128b99d6b7 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:41.737018012 +0000 UTC m=+119.645937361 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/fa9082e9-a8a6-433b-97ca-70128b99d6b7-certs") pod "machine-config-server-87k2l" (UID: "fa9082e9-a8a6-433b-97ca-70128b99d6b7") : failed to sync secret cache: timed out waiting for the condition Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.237054 5124 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.237088 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3398b97-1658-4344-afde-a15d309846c9-config-volume podName:b3398b97-1658-4344-afde-a15d309846c9 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:41.737082154 +0000 UTC m=+119.646001503 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b3398b97-1658-4344-afde-a15d309846c9-config-volume") pod "dns-default-n64rh" (UID: "b3398b97-1658-4344-afde-a15d309846c9") : failed to sync configmap cache: timed out waiting for the condition Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.237113 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cf1e5da6-8866-4e4d-bafe-84bc0f76c41f-webhook-certs podName:cf1e5da6-8866-4e4d-bafe-84bc0f76c41f nodeName:}" failed. No retries permitted until 2026-01-26 00:10:41.737107725 +0000 UTC m=+119.646027074 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cf1e5da6-8866-4e4d-bafe-84bc0f76c41f-webhook-certs") pod "multus-admission-controller-69db94689b-wpz4s" (UID: "cf1e5da6-8866-4e4d-bafe-84bc0f76c41f") : failed to sync secret cache: timed out waiting for the condition Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.253360 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.254964 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.255221 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:41.755195624 +0000 UTC m=+119.664114983 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.255759 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.256059 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:41.756048017 +0000 UTC m=+119.664967366 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.273364 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.292746 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.313637 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.333429 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.353132 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.357458 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.357705 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-26 00:10:41.857660801 +0000 UTC m=+119.766580150 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.358193 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.358611 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:41.858563625 +0000 UTC m=+119.767482984 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.365057 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sctbw" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.374067 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.394394 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.413621 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.433096 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.453349 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.459177 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.459464 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:41.959441609 +0000 UTC m=+119.868360958 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.459890 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.460371 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:41.960355104 +0000 UTC m=+119.869274463 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.474190 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.493798 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.513652 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.533268 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.553728 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.561828 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.562002 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:42.061979639 +0000 UTC m=+119.970899008 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.562177 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.562618 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 00:10:42.062571434 +0000 UTC m=+119.971490773 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.585182 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.593871 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.613997 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.634521 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.654172 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.665521 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.666041 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/973d580d-7e62-419e-be96-115733ca98bf-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-5hwt4\" (UID: \"973d580d-7e62-419e-be96-115733ca98bf\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-5hwt4" Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.666059 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:42.166028177 +0000 UTC m=+120.074947526 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.666167 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f8124ef-e842-4eaa-a6bb-54b67540b2ac-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-zbgtx\" (UID: \"8f8124ef-e842-4eaa-a6bb-54b67540b2ac\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-zbgtx" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.666200 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec000458-4225-4aa1-b22e-244d7d137c9e-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-vfn25\" (UID: \"ec000458-4225-4aa1-b22e-244d7d137c9e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vfn25" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.666234 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a006121-cc9c-46f5-98db-14148f556b11-srv-cert\") pod \"olm-operator-5cdf44d969-5tzb8\" (UID: \"8a006121-cc9c-46f5-98db-14148f556b11\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5tzb8" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.666306 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e47ad1f1-7281-4a86-bac9-bbaa37dfeab1-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-mbllj\" (UID: \"e47ad1f1-7281-4a86-bac9-bbaa37dfeab1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-mbllj" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.666426 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.666560 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e47ad1f1-7281-4a86-bac9-bbaa37dfeab1-config\") pod \"openshift-kube-scheduler-operator-54f497555d-mbllj\" (UID: \"e47ad1f1-7281-4a86-bac9-bbaa37dfeab1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-mbllj" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.666633 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f8124ef-e842-4eaa-a6bb-54b67540b2ac-config\") pod \"kube-controller-manager-operator-69d5f845f8-zbgtx\" (UID: \"8f8124ef-e842-4eaa-a6bb-54b67540b2ac\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-zbgtx" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.666693 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec000458-4225-4aa1-b22e-244d7d137c9e-config\") pod \"kube-storage-version-migrator-operator-565b79b866-vfn25\" (UID: \"ec000458-4225-4aa1-b22e-244d7d137c9e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vfn25" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.666747 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/973d580d-7e62-419e-be96-115733ca98bf-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-5hwt4\" (UID: \"973d580d-7e62-419e-be96-115733ca98bf\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-5hwt4" Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.667178 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:42.167151216 +0000 UTC m=+120.076070615 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.667502 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e47ad1f1-7281-4a86-bac9-bbaa37dfeab1-config\") pod \"openshift-kube-scheduler-operator-54f497555d-mbllj\" (UID: \"e47ad1f1-7281-4a86-bac9-bbaa37dfeab1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-mbllj" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.668353 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f8124ef-e842-4eaa-a6bb-54b67540b2ac-config\") pod \"kube-controller-manager-operator-69d5f845f8-zbgtx\" (UID: \"8f8124ef-e842-4eaa-a6bb-54b67540b2ac\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-zbgtx" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.668627 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec000458-4225-4aa1-b22e-244d7d137c9e-config\") pod \"kube-storage-version-migrator-operator-565b79b866-vfn25\" (UID: \"ec000458-4225-4aa1-b22e-244d7d137c9e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vfn25" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.671184 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/973d580d-7e62-419e-be96-115733ca98bf-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-5hwt4\" (UID: \"973d580d-7e62-419e-be96-115733ca98bf\") " 
pod="openshift-marketplace/marketplace-operator-547dbd544d-5hwt4" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.671975 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f8124ef-e842-4eaa-a6bb-54b67540b2ac-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-zbgtx\" (UID: \"8f8124ef-e842-4eaa-a6bb-54b67540b2ac\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-zbgtx" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.672864 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e47ad1f1-7281-4a86-bac9-bbaa37dfeab1-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-mbllj\" (UID: \"e47ad1f1-7281-4a86-bac9-bbaa37dfeab1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-mbllj" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.673010 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.673314 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/973d580d-7e62-419e-be96-115733ca98bf-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-5hwt4\" (UID: \"973d580d-7e62-419e-be96-115733ca98bf\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-5hwt4" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.673998 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec000458-4225-4aa1-b22e-244d7d137c9e-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-vfn25\" (UID: \"ec000458-4225-4aa1-b22e-244d7d137c9e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vfn25" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.694750 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.700187 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a006121-cc9c-46f5-98db-14148f556b11-srv-cert\") pod \"olm-operator-5cdf44d969-5tzb8\" (UID: \"8a006121-cc9c-46f5-98db-14148f556b11\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5tzb8" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.713669 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.733468 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.753478 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.768823 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.769023 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:42.268987787 +0000 UTC m=+120.177907176 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.769510 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e811bf67-7a6d-4279-bbff-b2cf02f66558-srv-cert\") pod \"catalog-operator-75ff9f647d-sdxrl\" (UID: \"e811bf67-7a6d-4279-bbff-b2cf02f66558\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sdxrl" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.769651 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e451454a-5a94-4535-823c-523ea6f6f7de-cert\") pod \"ingress-canary-nc9fk\" (UID: \"e451454a-5a94-4535-823c-523ea6f6f7de\") " pod="openshift-ingress-canary/ingress-canary-nc9fk" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.769774 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a69d5905-85d8-49b8-ab54-15fc8f104c31-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-vp4mw\" (UID: \"a69d5905-85d8-49b8-ab54-15fc8f104c31\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vp4mw" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.769855 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b3398b97-1658-4344-afde-a15d309846c9-metrics-tls\") pod \"dns-default-n64rh\" (UID: \"b3398b97-1658-4344-afde-a15d309846c9\") " pod="openshift-dns/dns-default-n64rh" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.770114 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.770184 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/fa9082e9-a8a6-433b-97ca-70128b99d6b7-certs\") pod \"machine-config-server-87k2l\" (UID: \"fa9082e9-a8a6-433b-97ca-70128b99d6b7\") " pod="openshift-machine-config-operator/machine-config-server-87k2l" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.770223 5124 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/fa9082e9-a8a6-433b-97ca-70128b99d6b7-node-bootstrap-token\") pod \"machine-config-server-87k2l\" (UID: \"fa9082e9-a8a6-433b-97ca-70128b99d6b7\") " pod="openshift-machine-config-operator/machine-config-server-87k2l" Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.770434 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:42.270415865 +0000 UTC m=+120.179335214 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.770484 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b3a1a33e-2dab-43f6-8c34-6ac84e05eb03-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-2xm5v\" (UID: \"b3a1a33e-2dab-43f6-8c34-6ac84e05eb03\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2xm5v" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.770540 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3398b97-1658-4344-afde-a15d309846c9-config-volume\") pod \"dns-default-n64rh\" (UID: \"b3398b97-1658-4344-afde-a15d309846c9\") " pod="openshift-dns/dns-default-n64rh" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.770612 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cf1e5da6-8866-4e4d-bafe-84bc0f76c41f-webhook-certs\") pod \"multus-admission-controller-69db94689b-wpz4s\" (UID: \"cf1e5da6-8866-4e4d-bafe-84bc0f76c41f\") " pod="openshift-multus/multus-admission-controller-69db94689b-wpz4s" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.773140 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.773569 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e811bf67-7a6d-4279-bbff-b2cf02f66558-srv-cert\") pod \"catalog-operator-75ff9f647d-sdxrl\" (UID: \"e811bf67-7a6d-4279-bbff-b2cf02f66558\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sdxrl" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.775415 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cf1e5da6-8866-4e4d-bafe-84bc0f76c41f-webhook-certs\") pod \"multus-admission-controller-69db94689b-wpz4s\" (UID: \"cf1e5da6-8866-4e4d-bafe-84bc0f76c41f\") " pod="openshift-multus/multus-admission-controller-69db94689b-wpz4s" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.777229 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b3a1a33e-2dab-43f6-8c34-6ac84e05eb03-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-2xm5v\" (UID: \"b3a1a33e-2dab-43f6-8c34-6ac84e05eb03\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2xm5v" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.783991 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e451454a-5a94-4535-823c-523ea6f6f7de-cert\") pod \"ingress-canary-nc9fk\" (UID: \"e451454a-5a94-4535-823c-523ea6f6f7de\") " pod="openshift-ingress-canary/ingress-canary-nc9fk" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.793000 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.813388 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.834837 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.853684 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.866341 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b3398b97-1658-4344-afde-a15d309846c9-metrics-tls\") pod \"dns-default-n64rh\" (UID: \"b3398b97-1658-4344-afde-a15d309846c9\") " pod="openshift-dns/dns-default-n64rh" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.871319 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.871671 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:42.371644518 +0000 UTC m=+120.280563887 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.872480 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.873041 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.873249 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:42.373229091 +0000 UTC m=+120.282148450 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.881785 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3398b97-1658-4344-afde-a15d309846c9-config-volume\") pod \"dns-default-n64rh\" (UID: \"b3398b97-1658-4344-afde-a15d309846c9\") " pod="openshift-dns/dns-default-n64rh" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.893672 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.900322 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a69d5905-85d8-49b8-ab54-15fc8f104c31-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-vp4mw\" (UID: \"a69d5905-85d8-49b8-ab54-15fc8f104c31\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vp4mw" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.912924 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.933830 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.953068 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.972333 5124 request.go:752] "Waited before 
sending request" delay="1.97216637s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dnode-bootstrapper-token&limit=500&resourceVersion=0" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.973579 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.974755 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.974841 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:42.474824534 +0000 UTC m=+120.383743883 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.976260 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:41 crc kubenswrapper[5124]: E0126 00:10:41.976525 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:42.476514769 +0000 UTC m=+120.385434118 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.987402 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/fa9082e9-a8a6-433b-97ca-70128b99d6b7-node-bootstrap-token\") pod \"machine-config-server-87k2l\" (UID: \"fa9082e9-a8a6-433b-97ca-70128b99d6b7\") " pod="openshift-machine-config-operator/machine-config-server-87k2l" Jan 26 00:10:41 crc kubenswrapper[5124]: I0126 00:10:41.994318 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.014460 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.025509 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/fa9082e9-a8a6-433b-97ca-70128b99d6b7-certs\") pod \"machine-config-server-87k2l\" (UID: \"fa9082e9-a8a6-433b-97ca-70128b99d6b7\") " pod="openshift-machine-config-operator/machine-config-server-87k2l" Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.054119 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x9k8\" (UniqueName: \"kubernetes.io/projected/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-kube-api-access-8x9k8\") pod \"oauth-openshift-66458b6674-v5jrb\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.079369 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:42 crc kubenswrapper[5124]: E0126 00:10:42.080317 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:42.580297081 +0000 UTC m=+120.489216430 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.097301 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/670e3869-615d-43d1-8b6a-e0c80cebaab9-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-vq8mw\" (UID: \"670e3869-615d-43d1-8b6a-e0c80cebaab9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vq8mw" Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.114129 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wj5b\" (UniqueName: \"kubernetes.io/projected/f3b6839d-b688-438b-bf37-fa1f421afc27-kube-api-access-8wj5b\") pod \"openshift-config-operator-5777786469-lxzd9\" (UID: \"f3b6839d-b688-438b-bf37-fa1f421afc27\") " pod="openshift-config-operator/openshift-config-operator-5777786469-lxzd9" Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.131188 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwfv8\" (UniqueName: \"kubernetes.io/projected/670e3869-615d-43d1-8b6a-e0c80cebaab9-kube-api-access-cwfv8\") pod \"cluster-image-registry-operator-86c45576b9-vq8mw\" (UID: \"670e3869-615d-43d1-8b6a-e0c80cebaab9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vq8mw" Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.134267 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmp97\" (UniqueName: \"kubernetes.io/projected/26da0b98-2814-44cd-b28b-a1b2ef0ee88e-kube-api-access-zmp97\") pod \"controller-manager-65b6cccf98-5cjkn\" (UID: \"26da0b98-2814-44cd-b28b-a1b2ef0ee88e\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-5cjkn" Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.146263 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f8124ef-e842-4eaa-a6bb-54b67540b2ac-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-zbgtx\" (UID: \"8f8124ef-e842-4eaa-a6bb-54b67540b2ac\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-zbgtx" Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.170257 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktt45\" (UniqueName: \"kubernetes.io/projected/2e062989-8ba6-44a5-8f95-e1958da237ad-kube-api-access-ktt45\") pod \"csi-hostpathplugin-kwjfc\" (UID: \"2e062989-8ba6-44a5-8f95-e1958da237ad\") " pod="hostpath-provisioner/csi-hostpathplugin-kwjfc" Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.181751 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:42 crc kubenswrapper[5124]: 
E0126 00:10:42.182121 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:42.68210582 +0000 UTC m=+120.591025159 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.185962 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt7vt\" (UniqueName: \"kubernetes.io/projected/c2cd8439-aeb3-4321-9842-11b3cbb37b0b-kube-api-access-wt7vt\") pod \"router-default-68cf44c8b8-9jvql\" (UID: \"c2cd8439-aeb3-4321-9842-11b3cbb37b0b\") " pod="openshift-ingress/router-default-68cf44c8b8-9jvql" Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.199820 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-9jvql" Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.204903 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-5cjkn" Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.206175 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s7hk\" (UniqueName: \"kubernetes.io/projected/a219f23e-815a-42e8-82a6-941d1624c7d7-kube-api-access-8s7hk\") pod \"downloads-747b44746d-vcw8h\" (UID: \"a219f23e-815a-42e8-82a6-941d1624c7d7\") " pod="openshift-console/downloads-747b44746d-vcw8h" Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.214228 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-lxzd9" Jan 26 00:10:42 crc kubenswrapper[5124]: W0126 00:10:42.220661 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2cd8439_aeb3_4321_9842_11b3cbb37b0b.slice/crio-fb63d941b2ef6734d377574666c556a062307b99daf13bdb4f30ac8a94babff8 WatchSource:0}: Error finding container fb63d941b2ef6734d377574666c556a062307b99daf13bdb4f30ac8a94babff8: Status 404 returned error can't find the container with id fb63d941b2ef6734d377574666c556a062307b99daf13bdb4f30ac8a94babff8 Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.229737 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjs99\" (UniqueName: \"kubernetes.io/projected/23eb49a3-e378-481a-932f-83ec71b22e6d-kube-api-access-hjs99\") pod \"service-ca-74545575db-nsc2v\" (UID: \"23eb49a3-e378-481a-932f-83ec71b22e6d\") " pod="openshift-service-ca/service-ca-74545575db-nsc2v" Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.249413 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhts4\" (UniqueName: \"kubernetes.io/projected/b9496837-38dd-4e08-bf40-9a191112e42a-kube-api-access-rhts4\") pod \"machine-api-operator-755bb95488-6629f\" (UID: \"b9496837-38dd-4e08-bf40-9a191112e42a\") " pod="openshift-machine-api/machine-api-operator-755bb95488-6629f" Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.267412 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwdmt\" (UniqueName: \"kubernetes.io/projected/2046c412-f2fc-4d3e-97c7-fa57c6683752-kube-api-access-jwdmt\") pod \"authentication-operator-7f5c659b84-qdvls\" (UID: \"2046c412-f2fc-4d3e-97c7-fa57c6683752\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-qdvls" Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.271234 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-zbgtx" Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.282960 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:42 crc kubenswrapper[5124]: E0126 00:10:42.283087 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:42.783069897 +0000 UTC m=+120.691989246 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.283211 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.283238 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.283381 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.283418 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.283451 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:42 crc kubenswrapper[5124]: E0126 00:10:42.283725 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:42.783708835 +0000 UTC m=+120.692628184 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.288865 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5205d539-f164-46b4-858c-9ca958a1102a-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-xzm9l\" (UID: \"5205d539-f164-46b4-858c-9ca958a1102a\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-xzm9l" Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.347980 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5ce48d95-5f74-4d15-8f19-94cfd81c3dcf-bound-sa-token\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.385062 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:42 crc kubenswrapper[5124]: E0126 00:10:42.385277 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:42.885241257 +0000 UTC m=+120.794160626 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.385463 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:42 crc kubenswrapper[5124]: E0126 00:10:42.386002 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:42.885981216 +0000 UTC m=+120.794900565 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.486906 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:42 crc kubenswrapper[5124]: E0126 00:10:42.487126 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:42.987094587 +0000 UTC m=+120.896013936 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.487487 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b-metrics-certs\") pod \"network-metrics-daemon-sctbw\" (UID: \"08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b\") " pod="openshift-multus/network-metrics-daemon-sctbw" Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.487574 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:42 crc kubenswrapper[5124]: E0126 00:10:42.487970 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:42.987957639 +0000 UTC m=+120.896876988 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.570241 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkh5d\" (UniqueName: \"kubernetes.io/projected/5ce48d95-5f74-4d15-8f19-94cfd81c3dcf-kube-api-access-wkh5d\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.589568 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:42 crc kubenswrapper[5124]: E0126 00:10:42.591505 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:43.091487154 +0000 UTC m=+121.000406493 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.616181 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-zbgtx"] Jan 26 00:10:42 crc kubenswrapper[5124]: W0126 00:10:42.629336 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f8124ef_e842_4eaa_a6bb_54b67540b2ac.slice/crio-23e00f5fe6945ae09c38171e321ab17c5cfd09ebb6a7e7e442688f6e7d1d053b WatchSource:0}: Error finding container 23e00f5fe6945ae09c38171e321ab17c5cfd09ebb6a7e7e442688f6e7d1d053b: Status 404 returned error can't find the container with id 23e00f5fe6945ae09c38171e321ab17c5cfd09ebb6a7e7e442688f6e7d1d053b Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.629722 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-trhq6\" (UniqueName: \"kubernetes.io/projected/036651d1-0c52-4454-8385-bf3f84e19378-kube-api-access-trhq6\") pod \"image-pruner-29489760-dm2tt\" (UID: \"036651d1-0c52-4454-8385-bf3f84e19378\") " pod="openshift-image-registry/image-pruner-29489760-dm2tt" Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.641781 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-lxzd9"] Jan 26 00:10:42 crc kubenswrapper[5124]: W0126 00:10:42.648140 5124 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3b6839d_b688_438b_bf37_fa1f421afc27.slice/crio-19d745ac339fb885ffe02c650d26a5b6275db40719d1d4f121ca0117bac23bb5 WatchSource:0}: Error finding container 19d745ac339fb885ffe02c650d26a5b6275db40719d1d4f121ca0117bac23bb5: Status 404 returned error can't find the container with id 19d745ac339fb885ffe02c650d26a5b6275db40719d1d4f121ca0117bac23bb5 Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.660341 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-5cjkn"] Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.696145 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:42 crc kubenswrapper[5124]: E0126 00:10:42.696466 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:43.196450088 +0000 UTC m=+121.105369437 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.731901 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7bpv\" (UniqueName: \"kubernetes.io/projected/288efdc1-c138-42d5-9416-5c9d0faaa831-kube-api-access-d7bpv\") pod \"console-64d44f6ddf-b7nfk\" (UID: \"288efdc1-c138-42d5-9416-5c9d0faaa831\") " pod="openshift-console/console-64d44f6ddf-b7nfk" Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.750252 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxlwg\" (UniqueName: \"kubernetes.io/projected/a69d5905-85d8-49b8-ab54-15fc8f104c31-kube-api-access-fxlwg\") pod \"cni-sysctl-allowlist-ds-vp4mw\" (UID: \"a69d5905-85d8-49b8-ab54-15fc8f104c31\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vp4mw" Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.791403 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqtl7\" (UniqueName: \"kubernetes.io/projected/cf1e5da6-8866-4e4d-bafe-84bc0f76c41f-kube-api-access-jqtl7\") pod \"multus-admission-controller-69db94689b-wpz4s\" (UID: \"cf1e5da6-8866-4e4d-bafe-84bc0f76c41f\") " pod="openshift-multus/multus-admission-controller-69db94689b-wpz4s" Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.805119 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:42 crc kubenswrapper[5124]: E0126 00:10:42.805353 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:43.305320415 +0000 UTC m=+121.214239784 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.805954 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:42 crc kubenswrapper[5124]: E0126 00:10:42.806283 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:43.306265859 +0000 UTC m=+121.215185198 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.830700 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lft8d\" (UniqueName: \"kubernetes.io/projected/4ad39e4e-4d41-443b-bfc7-a4ec7113664c-kube-api-access-lft8d\") pod \"machine-config-operator-67c9d58cbb-8cj7n\" (UID: \"4ad39e4e-4d41-443b-bfc7-a4ec7113664c\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8cj7n" Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.855761 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-zbgtx" event={"ID":"8f8124ef-e842-4eaa-a6bb-54b67540b2ac","Type":"ContainerStarted","Data":"23e00f5fe6945ae09c38171e321ab17c5cfd09ebb6a7e7e442688f6e7d1d053b"} Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.857122 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-9jvql" event={"ID":"c2cd8439-aeb3-4321-9842-11b3cbb37b0b","Type":"ContainerStarted","Data":"f5d1d195f0841e54fd1105d19c22f8395823168e9a46d2177f4087a6c290e405"} Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.857168 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-9jvql" 
event={"ID":"c2cd8439-aeb3-4321-9842-11b3cbb37b0b","Type":"ContainerStarted","Data":"fb63d941b2ef6734d377574666c556a062307b99daf13bdb4f30ac8a94babff8"} Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.858390 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-5cjkn" event={"ID":"26da0b98-2814-44cd-b28b-a1b2ef0ee88e","Type":"ContainerStarted","Data":"fde88ae17d1ae04c73e8b87aff76ecc77c94e2d70b293268ec05fb2e36533635"} Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.858432 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-5cjkn" event={"ID":"26da0b98-2814-44cd-b28b-a1b2ef0ee88e","Type":"ContainerStarted","Data":"d77be7a904260d259be8993948dd0a5a7a04c32d8b2eb50b69eb6adaf76758e7"} Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.858547 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-5cjkn" Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.860579 5124 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-5cjkn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.860655 5124 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-5cjkn" podUID="26da0b98-2814-44cd-b28b-a1b2ef0ee88e" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.860724 5124 generic.go:358] "Generic (PLEG): container finished" podID="f3b6839d-b688-438b-bf37-fa1f421afc27" containerID="6be0d091366293eec91c4abcf84c3ba8a2eb3bc08034bbe821c5256ce8c10128" exitCode=0 Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.860768 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-lxzd9" event={"ID":"f3b6839d-b688-438b-bf37-fa1f421afc27","Type":"ContainerDied","Data":"6be0d091366293eec91c4abcf84c3ba8a2eb3bc08034bbe821c5256ce8c10128"} Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.860789 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-lxzd9" event={"ID":"f3b6839d-b688-438b-bf37-fa1f421afc27","Type":"ContainerStarted","Data":"19d745ac339fb885ffe02c650d26a5b6275db40719d1d4f121ca0117bac23bb5"} Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.867027 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpcz4\" (UniqueName: \"kubernetes.io/projected/b3398b97-1658-4344-afde-a15d309846c9-kube-api-access-lpcz4\") pod \"dns-default-n64rh\" (UID: \"b3398b97-1658-4344-afde-a15d309846c9\") " pod="openshift-dns/dns-default-n64rh" Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.887807 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtvh7\" (UniqueName: \"kubernetes.io/projected/fa9082e9-a8a6-433b-97ca-70128b99d6b7-kube-api-access-gtvh7\") pod \"machine-config-server-87k2l\" (UID: \"fa9082e9-a8a6-433b-97ca-70128b99d6b7\") " pod="openshift-machine-config-operator/machine-config-server-87k2l" Jan 26 
00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.907127 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.907534 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc8wm\" (UniqueName: \"kubernetes.io/projected/1b3bd69c-7b97-42bb-9f12-7d690416e91f-kube-api-access-gc8wm\") pod \"machine-config-controller-f9cdd68f7-rb8jj\" (UID: \"1b3bd69c-7b97-42bb-9f12-7d690416e91f\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-rb8jj" Jan 26 00:10:42 crc kubenswrapper[5124]: E0126 00:10:42.907769 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:43.40775061 +0000 UTC m=+121.316669959 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.946040 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-vp4mw" Jan 26 00:10:42 crc kubenswrapper[5124]: W0126 00:10:42.958196 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda69d5905_85d8_49b8_ab54_15fc8f104c31.slice/crio-9ee7c537f0f7b0f4d50b6ab82b73cce7de07712da1cc09804a170281d899f9b9 WatchSource:0}: Error finding container 9ee7c537f0f7b0f4d50b6ab82b73cce7de07712da1cc09804a170281d899f9b9: Status 404 returned error can't find the container with id 9ee7c537f0f7b0f4d50b6ab82b73cce7de07712da1cc09804a170281d899f9b9 Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.974797 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx6cs\" (UniqueName: \"kubernetes.io/projected/b3a1a33e-2dab-43f6-8c34-6ac84e05eb03-kube-api-access-cx6cs\") pod \"control-plane-machine-set-operator-75ffdb6fcd-2xm5v\" (UID: \"b3a1a33e-2dab-43f6-8c34-6ac84e05eb03\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2xm5v" Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.993031 5124 request.go:752] "Waited before sending request" delay="2.627998859s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0" Jan 26 00:10:42 crc kubenswrapper[5124]: I0126 00:10:42.994499 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.008760 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:43 crc kubenswrapper[5124]: E0126 00:10:43.012956 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:43.512938339 +0000 UTC m=+121.421857688 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.018424 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.024686 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.033865 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.051789 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.053507 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.064312 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.064948 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.073117 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.083661 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.094019 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.095081 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.100925 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b-metrics-certs\") pod \"network-metrics-daemon-sctbw\" (UID: \"08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b\") " pod="openshift-multus/network-metrics-daemon-sctbw" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.104874 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.110080 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:43 crc kubenswrapper[5124]: E0126 00:10:43.110487 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:43.610467275 +0000 UTC m=+121.519386624 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.114759 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.133556 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.134741 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vq8mw" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.153482 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.168744 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e47ad1f1-7281-4a86-bac9-bbaa37dfeab1-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-mbllj\" (UID: \"e47ad1f1-7281-4a86-bac9-bbaa37dfeab1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-mbllj" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.174474 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.175617 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.179004 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sctbw" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.199295 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.201847 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-9jvql" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.206801 5124 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-9jvql container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:10:43 crc kubenswrapper[5124]: [-]has-synced failed: reason withheld Jan 26 00:10:43 crc kubenswrapper[5124]: [+]process-running ok Jan 26 00:10:43 crc kubenswrapper[5124]: healthz check failed Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.206848 5124 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-9jvql" podUID="c2cd8439-aeb3-4321-9842-11b3cbb37b0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.216614 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:43 crc kubenswrapper[5124]: E0126 00:10:43.217538 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:43.717525584 +0000 UTC m=+121.626444933 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.219421 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.221811 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-kwjfc" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.234825 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.239722 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-747b44746d-vcw8h" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.253815 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.283016 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.293255 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.320757 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.321439 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:43 crc kubenswrapper[5124]: E0126 00:10:43.321665 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:43.821637034 +0000 UTC m=+121.730556373 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.321783 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:43 crc kubenswrapper[5124]: E0126 00:10:43.322282 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:43.822273711 +0000 UTC m=+121.731193060 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.333127 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4fe9fae5-6a94-45aa-9fe5-086c9dddb3c1-kube-api-access\") pod \"kube-apiserver-operator-575994946d-csld6\" (UID: \"4fe9fae5-6a94-45aa-9fe5-086c9dddb3c1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-csld6" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.336862 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.355771 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.355812 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-nsc2v" Jan 26 00:10:43 crc kubenswrapper[5124]: W0126 00:10:43.381198 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a9ae5f6_97bd_46ac_bafa_ca1b4452a141.slice/crio-87840cf2cf70750d007a08c31b1d377bb7f09aa62069dac4f5f6c34b0312dde3 WatchSource:0}: Error finding container 87840cf2cf70750d007a08c31b1d377bb7f09aa62069dac4f5f6c34b0312dde3: Status 404 returned error can't find the container with id 87840cf2cf70750d007a08c31b1d377bb7f09aa62069dac4f5f6c34b0312dde3 Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.381940 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-vq8mw"] Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.384682 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Jan 26 00:10:43 crc kubenswrapper[5124]: W0126 00:10:43.388890 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17b87002_b798_480a_8e17_83053d698239.slice/crio-a1d7e5e4524e2e77a254ef26abfb3edad70e6e25323d23f7c92822c07f304ac3 WatchSource:0}: Error finding container a1d7e5e4524e2e77a254ef26abfb3edad70e6e25323d23f7c92822c07f304ac3: Status 404 returned error can't find the container with id a1d7e5e4524e2e77a254ef26abfb3edad70e6e25323d23f7c92822c07f304ac3 Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.394064 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.413713 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.418502 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-sctbw"] Jan 26 00:10:43 crc 
kubenswrapper[5124]: I0126 00:10:43.421018 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-6629f" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.422711 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:43 crc kubenswrapper[5124]: E0126 00:10:43.424552 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:43.924527292 +0000 UTC m=+121.833446641 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.437280 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Jan 26 00:10:43 crc kubenswrapper[5124]: W0126 00:10:43.439353 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08aecd79_a3de_4a82_a0bb_2a1edf3d8c0b.slice/crio-94b67c883717e20ee147beb560df31c03868269c6b29796d90fbf6fb339c7e5a WatchSource:0}: Error finding container 94b67c883717e20ee147beb560df31c03868269c6b29796d90fbf6fb339c7e5a: Status 404 returned error can't find the container with id 94b67c883717e20ee147beb560df31c03868269c6b29796d90fbf6fb339c7e5a Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.452910 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.462423 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-qdvls" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.474609 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.485372 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-v5jrb"] Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.494187 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.517648 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.525170 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:43 crc kubenswrapper[5124]: E0126 00:10:43.525558 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:44.025545291 +0000 UTC m=+121.934464640 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.534163 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.557275 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-vcw8h"] Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.557442 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.573239 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.595103 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.604994 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-wpz4s" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.614187 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"pruner-dockercfg-rs58m\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.623077 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29489760-dm2tt" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.625878 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:43 crc kubenswrapper[5124]: E0126 00:10:43.626019 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:44.125996214 +0000 UTC m=+122.034915563 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.626451 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:43 crc kubenswrapper[5124]: E0126 00:10:43.626996 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:44.12697352 +0000 UTC m=+122.035892889 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.636387 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-kwjfc"] Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.636668 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.648078 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-nsc2v"] Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.654345 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.659231 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-n64rh" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.672993 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.694809 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.704512 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-b7nfk" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.713124 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.719847 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-87k2l" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.723781 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-6629f"] Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.727076 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:43 crc kubenswrapper[5124]: E0126 00:10:43.727535 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:44.227519916 +0000 UTC m=+122.136439265 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:43 crc kubenswrapper[5124]: W0126 00:10:43.743235 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9496837_38dd_4e08_bf40_9a191112e42a.slice/crio-b1e610ee4e9e634909fd8cc2a3874be3b83f2521d0e6f5f9da22a3eaf9b496d9 WatchSource:0}: Error finding container b1e610ee4e9e634909fd8cc2a3874be3b83f2521d0e6f5f9da22a3eaf9b496d9: Status 404 returned error can't find the container with id b1e610ee4e9e634909fd8cc2a3874be3b83f2521d0e6f5f9da22a3eaf9b496d9 Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.753387 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.756092 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-rb8jj" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.775907 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-qdvls"] Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.778738 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.786644 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8cj7n" Jan 26 00:10:43 crc kubenswrapper[5124]: W0126 00:10:43.787663 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2046c412_f2fc_4d3e_97c7_fa57c6683752.slice/crio-6fb01dfb6e8034d6991b846bd17c9aaa6b72550996b35203780eb21574018da4 WatchSource:0}: Error finding container 6fb01dfb6e8034d6991b846bd17c9aaa6b72550996b35203780eb21574018da4: Status 404 returned error can't find the container with id 6fb01dfb6e8034d6991b846bd17c9aaa6b72550996b35203780eb21574018da4 Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.793531 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.810554 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7cvw\" (UniqueName: \"kubernetes.io/projected/acdc983c-4d4e-4a1e-82a3-a137fe39882a-kube-api-access-z7cvw\") pod \"route-controller-manager-776cdc94d6-f6l2j\" (UID: \"acdc983c-4d4e-4a1e-82a3-a137fe39882a\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-f6l2j" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.813496 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.822067 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsxbj\" (UniqueName: \"kubernetes.io/projected/6b5e4a3d-13f4-42c6-9adb-30a826411994-kube-api-access-rsxbj\") pod \"etcd-operator-69b85846b6-sv2rt\" (UID: \"6b5e4a3d-13f4-42c6-9adb-30a826411994\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-sv2rt" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.828272 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:43 crc kubenswrapper[5124]: E0126 00:10:43.828565 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:44.328554005 +0000 UTC m=+122.237473354 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.835497 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.842156 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2xm5v" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.857956 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.870497 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fccv\" (UniqueName: \"kubernetes.io/projected/ec000458-4225-4aa1-b22e-244d7d137c9e-kube-api-access-2fccv\") pod \"kube-storage-version-migrator-operator-565b79b866-vfn25\" (UID: \"ec000458-4225-4aa1-b22e-244d7d137c9e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vfn25" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.874653 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.874892 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kwjfc" event={"ID":"2e062989-8ba6-44a5-8f95-e1958da237ad","Type":"ContainerStarted","Data":"9193ca8ecc50cfc0c4e4372628b75bea18663e87db97f4dd30b0682fc7799ba2"} Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.876138 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" event={"ID":"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f","Type":"ContainerStarted","Data":"d1d7d8d9f9479e246d68b7bc53df457d47531d1300caa96cf2e7bca02853a139"} Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.882660 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"a45852d292d94390b0fd6b541a608cc7095df42ed29a1645784410b2fdd9a5c9"} Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.882702 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"87840cf2cf70750d007a08c31b1d377bb7f09aa62069dac4f5f6c34b0312dde3"} Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.886578 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-lxzd9" event={"ID":"f3b6839d-b688-438b-bf37-fa1f421afc27","Type":"ContainerStarted","Data":"076dac9da81078ca6dc210ced6c83d1192ac2e4ab48a79f47177027d41fa9966"} Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.886843 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-lxzd9" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.888036 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8h4t\" (UniqueName: \"kubernetes.io/projected/5205d539-f164-46b4-858c-9ca958a1102a-kube-api-access-d8h4t\") pod \"ingress-operator-6b9cb4dbcf-xzm9l\" (UID: \"5205d539-f164-46b4-858c-9ca958a1102a\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-xzm9l" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.890518 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-zbgtx" event={"ID":"8f8124ef-e842-4eaa-a6bb-54b67540b2ac","Type":"ContainerStarted","Data":"97a6f22a89820f79987b16bdd1dacde265c454cb80045e2cbdfd64cb249f45b7"} Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.899764 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-qdvls" event={"ID":"2046c412-f2fc-4d3e-97c7-fa57c6683752","Type":"ContainerStarted","Data":"6fb01dfb6e8034d6991b846bd17c9aaa6b72550996b35203780eb21574018da4"} Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.900219 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.901243 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-6629f" event={"ID":"b9496837-38dd-4e08-bf40-9a191112e42a","Type":"ContainerStarted","Data":"b1e610ee4e9e634909fd8cc2a3874be3b83f2521d0e6f5f9da22a3eaf9b496d9"} Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.903544 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vq8mw" event={"ID":"670e3869-615d-43d1-8b6a-e0c80cebaab9","Type":"ContainerStarted","Data":"551bd776d6b0844b33e81d0ee8d3adaba1cb369dcb55f4c5db77889cc61e4d2e"} Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.903571 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vq8mw" event={"ID":"670e3869-615d-43d1-8b6a-e0c80cebaab9","Type":"ContainerStarted","Data":"87eb3c2d41737563dd8fab69645b55d00818a6f8929b8ac7a0e7464fb933336d"} Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.909974 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-vp4mw" event={"ID":"a69d5905-85d8-49b8-ab54-15fc8f104c31","Type":"ContainerStarted","Data":"589a3be92836dc1a24ad9d394ab7344448ca60f4dd548fd16fd8668afb470622"} Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.910223 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-vp4mw" event={"ID":"a69d5905-85d8-49b8-ab54-15fc8f104c31","Type":"ContainerStarted","Data":"9ee7c537f0f7b0f4d50b6ab82b73cce7de07712da1cc09804a170281d899f9b9"} Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.910840 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-vp4mw" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.911660 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qswl\" (UniqueName: \"kubernetes.io/projected/dfdd3fba-e428-46ea-a831-e53d949c342a-kube-api-access-4qswl\") pod \"service-ca-operator-5b9c976747-6np67\" (UID: \"dfdd3fba-e428-46ea-a831-e53d949c342a\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-6np67" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.913956 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.929150 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-sctbw" 
event={"ID":"08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b","Type":"ContainerStarted","Data":"a79fe6312c40c7ce5e0c2cdab2f72db4f2a143e4e21ad84e57f5b12f0a9e95a9"} Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.929196 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-sctbw" event={"ID":"08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b","Type":"ContainerStarted","Data":"94b67c883717e20ee147beb560df31c03868269c6b29796d90fbf6fb339c7e5a"} Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.934058 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.934119 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7xrk\" (UniqueName: \"kubernetes.io/projected/e93a2f69-37f1-47bc-b659-8684acf34de3-kube-api-access-w7xrk\") pod \"openshift-apiserver-operator-846cbfc458-jkc7k\" (UID: \"e93a2f69-37f1-47bc-b659-8684acf34de3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jkc7k" Jan 26 00:10:43 crc kubenswrapper[5124]: E0126 00:10:43.934235 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:44.434214726 +0000 UTC m=+122.343134075 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.934473 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:43 crc kubenswrapper[5124]: E0126 00:10:43.934855 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:44.434846153 +0000 UTC m=+122.343765502 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.936384 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.951749 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l6sb\" (UniqueName: \"kubernetes.io/projected/e811bf67-7a6d-4279-bbff-b2cf02f66558-kube-api-access-2l6sb\") pod \"catalog-operator-75ff9f647d-sdxrl\" (UID: \"e811bf67-7a6d-4279-bbff-b2cf02f66558\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sdxrl" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.952825 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r95zt\" (UniqueName: \"kubernetes.io/projected/460f5edc-0e33-44ee-b8ad-41e51e22924a-kube-api-access-r95zt\") pod \"package-server-manager-77f986bd66-kpn7g\" (UID: \"460f5edc-0e33-44ee-b8ad-41e51e22924a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kpn7g" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.955337 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.955482 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5q88\" (UniqueName: \"kubernetes.io/projected/2c16907d-1bcd-420c-879d-65a0552e69d3-kube-api-access-z5q88\") pod \"collect-profiles-29489760-ldpxs\" (UID: \"2c16907d-1bcd-420c-879d-65a0552e69d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-ldpxs" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.955481 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"525b9425aebfaac5fee15e513f822d0f012b1ba7953c7a5ff5f02cc9ef3c1a4a"} Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.955654 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"f12966e2a2b9b4825e044693d57dec4aecd1c779ab765c487ccab97722024ab4"} Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.959184 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-csld6" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.962408 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5wpm\" (UniqueName: \"kubernetes.io/projected/8a006121-cc9c-46f5-98db-14148f556b11-kube-api-access-d5wpm\") pod \"olm-operator-5cdf44d969-5tzb8\" (UID: \"8a006121-cc9c-46f5-98db-14148f556b11\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5tzb8" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.963600 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mppv7\" (UniqueName: \"kubernetes.io/projected/839e8646-b712-4725-8456-806e52a3144c-kube-api-access-mppv7\") pod \"packageserver-7d4fc7d867-zbjgw\" (UID: \"839e8646-b712-4725-8456-806e52a3144c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zbjgw" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.964029 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-nsc2v" event={"ID":"23eb49a3-e378-481a-932f-83ec71b22e6d","Type":"ContainerStarted","Data":"d05f33281001d2a88cd0bc11aa611c9920e17b8aaf781653fa9605f5346fd07a"} Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.966015 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-vcw8h" event={"ID":"a219f23e-815a-42e8-82a6-941d1624c7d7","Type":"ContainerStarted","Data":"8b733089b27373419a34b091b513de519eb077788914bb5d1b1dd779d708365e"} Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.982519 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.984365 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"75a2446c971621272afc91535c25cd8444ad0b1d2263d1ba58fa014321790909"} Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.984404 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"a1d7e5e4524e2e77a254ef26abfb3edad70e6e25323d23f7c92822c07f304ac3"} Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.984701 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.993539 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-clbcp\" (UniqueName: \"kubernetes.io/projected/973d580d-7e62-419e-be96-115733ca98bf-kube-api-access-clbcp\") pod \"marketplace-operator-547dbd544d-5hwt4\" (UID: \"973d580d-7e62-419e-be96-115733ca98bf\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-5hwt4" Jan 26 00:10:43 crc kubenswrapper[5124]: I0126 00:10:43.996072 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.007403 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-5cjkn" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 
00:10:44.025059 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.027468 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29489760-dm2tt"] Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.036680 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.037314 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:44 crc kubenswrapper[5124]: E0126 00:10:44.038421 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:44.538406169 +0000 UTC m=+122.447325518 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.038529 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-mbllj" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.057702 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.061348 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9c7t\" (UniqueName: \"kubernetes.io/projected/c696bafb-e286-4dc1-8edd-860c8c0564da-kube-api-access-k9c7t\") pod \"apiserver-9ddfb9f55-s87zt\" (UID: \"c696bafb-e286-4dc1-8edd-860c8c0564da\") " pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.065299 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-b7nfk"] Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.073296 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.074638 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrs9s\" (UniqueName: \"kubernetes.io/projected/b14632cd-c5f4-41b7-be2f-71d6f7f2c264-kube-api-access-rrs9s\") pod \"openshift-controller-manager-operator-686468bdd5-zfncw\" (UID: \"b14632cd-c5f4-41b7-be2f-71d6f7f2c264\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zfncw" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.079242 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzpmz\" (UniqueName: \"kubernetes.io/projected/498973e3-482d-4a19-9224-c3e67efc2a20-kube-api-access-gzpmz\") pod \"apiserver-8596bd845d-fpklc\" (UID: \"498973e3-482d-4a19-9224-c3e67efc2a20\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-fpklc" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.085169 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6vhx\" (UniqueName: \"kubernetes.io/projected/27a594f4-28ad-49d0-8ab7-f0c0ff14d65c-kube-api-access-z6vhx\") pod \"migrator-866fcbc849-fqxww\" (UID: \"27a594f4-28ad-49d0-8ab7-f0c0ff14d65c\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-fqxww" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.085752 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-n64rh"] Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.102818 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.126129 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4qpp\" (UniqueName: \"kubernetes.io/projected/80cd99f0-6ac5-4187-9bdd-79dde0e74a57-kube-api-access-b4qpp\") pod \"cluster-samples-operator-6b564684c8-9qgdz\" (UID: \"80cd99f0-6ac5-4187-9bdd-79dde0e74a57\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-9qgdz" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.129752 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.138311 5124 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:10:44 crc kubenswrapper[5124]: W0126 00:10:44.138553 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod036651d1_0c52_4454_8385_bf3f84e19378.slice/crio-66d5ec531b195f62c6975c8f1db431517f68279cedce5911ec20021b748018b0 WatchSource:0}: Error finding container 66d5ec531b195f62c6975c8f1db431517f68279cedce5911ec20021b748018b0: Status 404 returned error can't find the container with id 66d5ec531b195f62c6975c8f1db431517f68279cedce5911ec20021b748018b0 Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.141764 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf6g2\" (UniqueName: \"kubernetes.io/projected/e451454a-5a94-4535-823c-523ea6f6f7de-kube-api-access-kf6g2\") pod \"ingress-canary-nc9fk\" (UID: \"e451454a-5a94-4535-823c-523ea6f6f7de\") " pod="openshift-ingress-canary/ingress-canary-nc9fk" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.152373 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:44 crc kubenswrapper[5124]: E0126 00:10:44.153901 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:44.65388173 +0000 UTC m=+122.562801079 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.158624 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw9sz\" (UniqueName: \"kubernetes.io/projected/9f09670d-b0a1-4fa2-9d30-7b82c260e38d-kube-api-access-nw9sz\") pod \"console-operator-67c89758df-ns6rw\" (UID: \"9f09670d-b0a1-4fa2-9d30-7b82c260e38d\") " pod="openshift-console-operator/console-operator-67c89758df-ns6rw" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.158963 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.175706 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7n7n\" (UniqueName: \"kubernetes.io/projected/d76339a3-5850-4e27-be40-03180dc8e526-kube-api-access-g7n7n\") pod \"dns-operator-799b87ffcd-lvq9k\" (UID: \"d76339a3-5850-4e27-be40-03180dc8e526\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-lvq9k" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.178341 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.191165 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnrkm\" (UniqueName: \"kubernetes.io/projected/1185cd69-7c6a-46f0-acf1-64d587996124-kube-api-access-qnrkm\") pod \"machine-approver-54c688565-t5442\" (UID: \"1185cd69-7c6a-46f0-acf1-64d587996124\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-t5442" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.215820 5124 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-9jvql container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:10:44 crc kubenswrapper[5124]: [-]has-synced failed: reason withheld Jan 26 00:10:44 crc kubenswrapper[5124]: [+]process-running ok Jan 26 00:10:44 crc kubenswrapper[5124]: healthz check failed Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.215883 5124 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-9jvql" podUID="c2cd8439-aeb3-4321-9842-11b3cbb37b0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.216003 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.219628 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-sv2rt" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.237038 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.246335 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-xzm9l" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.256197 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:44 crc kubenswrapper[5124]: E0126 00:10:44.256337 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:44.756319897 +0000 UTC m=+122.665239246 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.256530 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:44 crc kubenswrapper[5124]: E0126 00:10:44.256851 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:44.7568436 +0000 UTC m=+122.665762949 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.257430 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.258003 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-wpz4s"] Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.264239 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-6np67" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.279458 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.286694 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kpn7g" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.286724 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sdxrl" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.286838 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5tzb8" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.287627 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zbjgw" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.296245 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.301280 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vfn25" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.313145 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.317900 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-f6l2j" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.324796 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2xm5v"] Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.340923 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.344750 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-5hwt4" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.357099 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.359098 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:44 crc kubenswrapper[5124]: E0126 00:10:44.359457 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:44.8594408 +0000 UTC m=+122.768360149 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:44 crc kubenswrapper[5124]: W0126 00:10:44.362267 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3a1a33e_2dab_43f6_8c34_6ac84e05eb03.slice/crio-80d5881bfe5a6e24139eaa1014de3f60521e4d4e81b2ff765797e1df3aed3ba5 WatchSource:0}: Error finding container 80d5881bfe5a6e24139eaa1014de3f60521e4d4e81b2ff765797e1df3aed3ba5: Status 404 returned error can't find the container with id 80d5881bfe5a6e24139eaa1014de3f60521e4d4e81b2ff765797e1df3aed3ba5 Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.363090 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-fpklc" Jan 26 00:10:44 crc kubenswrapper[5124]: W0126 00:10:44.363622 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf1e5da6_8866_4e4d_bafe_84bc0f76c41f.slice/crio-92a14d229abe1ec8a669dd42c5d38c93c2d9368cb22003ce1fb82a092b3b08fa WatchSource:0}: Error finding container 92a14d229abe1ec8a669dd42c5d38c93c2d9368cb22003ce1fb82a092b3b08fa: Status 404 returned error can't find the container with id 92a14d229abe1ec8a669dd42c5d38c93c2d9368cb22003ce1fb82a092b3b08fa Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.379882 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.380086 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jkc7k" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.398382 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.406260 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-ldpxs" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.423549 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.427697 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zfncw" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.457352 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.461388 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:44 crc kubenswrapper[5124]: E0126 00:10:44.461727 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:44.961711772 +0000 UTC m=+122.870631121 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.462749 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-ns6rw" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.468323 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-csld6"] Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.473283 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.480539 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-9qgdz" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.490016 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-mbllj"] Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.495162 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.495423 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.531696 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.533360 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.539716 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-fqxww" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.541182 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-nc9fk" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.548430 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-rb8jj"] Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.569614 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8cj7n"] Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.573577 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:44 crc kubenswrapper[5124]: E0126 00:10:44.574622 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:45.074600965 +0000 UTC m=+122.983520314 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.575116 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.579242 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-lvq9k" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.597889 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.599812 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-t5442" Jan 26 00:10:44 crc kubenswrapper[5124]: W0126 00:10:44.626032 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ad39e4e_4d41_443b_bfc7_a4ec7113664c.slice/crio-00fb941a0d3588a8f8710fd65bd50f8122777e2a9dea475f82f4512764400e84 WatchSource:0}: Error finding container 00fb941a0d3588a8f8710fd65bd50f8122777e2a9dea475f82f4512764400e84: Status 404 returned error can't find the container with id 00fb941a0d3588a8f8710fd65bd50f8122777e2a9dea475f82f4512764400e84 Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.655057 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-sv2rt"] Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.657221 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-xzm9l"] Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.676013 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:44 crc kubenswrapper[5124]: E0126 00:10:44.676331 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:45.176315512 +0000 UTC m=+123.085234861 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.743649 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-6np67"] Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.778153 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:44 crc kubenswrapper[5124]: E0126 00:10:44.778698 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:45.278680756 +0000 UTC m=+123.187600105 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.879667 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:44 crc kubenswrapper[5124]: E0126 00:10:44.880059 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:45.380045153 +0000 UTC m=+123.288964502 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.896505 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-5hwt4"] Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.979115 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zbjgw"] Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.981087 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:44 crc kubenswrapper[5124]: E0126 00:10:44.981376 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:45.481359609 +0000 UTC m=+123.390278958 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:44 crc kubenswrapper[5124]: I0126 00:10:44.999438 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" event={"ID":"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f","Type":"ContainerStarted","Data":"5550c9d24114d2b86df37d3cf1645f9455ef504a5b4c0810680a7b7c05ac758c"} Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.001206 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2xm5v" event={"ID":"b3a1a33e-2dab-43f6-8c34-6ac84e05eb03","Type":"ContainerStarted","Data":"80d5881bfe5a6e24139eaa1014de3f60521e4d4e81b2ff765797e1df3aed3ba5"} Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.002925 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-rb8jj" event={"ID":"1b3bd69c-7b97-42bb-9f12-7d690416e91f","Type":"ContainerStarted","Data":"99c7daae0f33a3fca4bd4608759acade9b9a63a147c7592e08d7d4e06436bc7a"} Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.005174 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8cj7n" event={"ID":"4ad39e4e-4d41-443b-bfc7-a4ec7113664c","Type":"ContainerStarted","Data":"00fb941a0d3588a8f8710fd65bd50f8122777e2a9dea475f82f4512764400e84"} Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.010682 5124 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-6np67" event={"ID":"dfdd3fba-e428-46ea-a831-e53d949c342a","Type":"ContainerStarted","Data":"514f7b0b9213aca700dd45b7970ec36aaf0d81751b06cd543929b7db008336f8"} Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.014447 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-qdvls" event={"ID":"2046c412-f2fc-4d3e-97c7-fa57c6683752","Type":"ContainerStarted","Data":"233c1e74fc61325fcb3620b3610813ea4a08798c8466744bb47f36d9a3defec3"} Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.016378 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-6629f" event={"ID":"b9496837-38dd-4e08-bf40-9a191112e42a","Type":"ContainerStarted","Data":"a19702e6a5b8a65b8d7f5270b8b8edc0658203457559711858ad4d93e14cb3e9"} Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.026720 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.043838 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-csld6" event={"ID":"4fe9fae5-6a94-45aa-9fe5-086c9dddb3c1","Type":"ContainerStarted","Data":"de396a26df5f26f746f2d9d7faa163004e6d0089c1e9ce9a7c8a706d7cb7af2d"} Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.058022 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29489760-dm2tt" event={"ID":"036651d1-0c52-4454-8385-bf3f84e19378","Type":"ContainerStarted","Data":"4ec23919a4d2a52dfa0dbf421a59683bbafd03cfb39a7902caecdea880479745"} Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.058066 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29489760-dm2tt" event={"ID":"036651d1-0c52-4454-8385-bf3f84e19378","Type":"ContainerStarted","Data":"66d5ec531b195f62c6975c8f1db431517f68279cedce5911ec20021b748018b0"} Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.067456 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-87k2l" event={"ID":"fa9082e9-a8a6-433b-97ca-70128b99d6b7","Type":"ContainerStarted","Data":"fe8e278161e4f7d7efc408643af1cb3a9384aad7e4d5a4b9b094b0c303d99c39"} Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.067528 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-87k2l" event={"ID":"fa9082e9-a8a6-433b-97ca-70128b99d6b7","Type":"ContainerStarted","Data":"8b6b6a2205938d7e481c19039f97c495cd0f53625366a3a6d8f6c0ccd6772c61"} Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.082687 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:45 crc kubenswrapper[5124]: E0126 00:10:45.103158 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 00:10:45.603138368 +0000 UTC m=+123.512057717 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.103754 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-sctbw" event={"ID":"08aecd79-a3de-4a82-a0bb-2a1edf3d8c0b","Type":"ContainerStarted","Data":"a2f3b7701a4dda69af329f5175a14ae268d7c09bcbbfd15f55fe5d9d959677ea"} Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.104891 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-sv2rt" event={"ID":"6b5e4a3d-13f4-42c6-9adb-30a826411994","Type":"ContainerStarted","Data":"10c6a4589e681c94b599397a4c2ae422e4a67f64190640b3dacf594da6acf848"} Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.105895 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-nsc2v" event={"ID":"23eb49a3-e378-481a-932f-83ec71b22e6d","Type":"ContainerStarted","Data":"d2c8aa906e312bf6dbb0a8cedf67a1191ebf0906947e114eeb94eecbe6749afa"} Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.109889 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-vcw8h" event={"ID":"a219f23e-815a-42e8-82a6-941d1624c7d7","Type":"ContainerStarted","Data":"f04d8b22d5618495fbede848685998317618d4358dba6e6a9400d0c0e282ca3a"} Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.110787 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-vcw8h" Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.117203 5124 patch_prober.go:28] interesting pod/downloads-747b44746d-vcw8h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.117676 5124 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-vcw8h" podUID="a219f23e-815a-42e8-82a6-941d1624c7d7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.152958 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-xzm9l" event={"ID":"5205d539-f164-46b4-858c-9ca958a1102a","Type":"ContainerStarted","Data":"abdd0f71dd36a49174b3a5d4cd279066d1c0fc3cb385a34ef87c6a7463a48131"} Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.154091 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-n64rh" event={"ID":"b3398b97-1658-4344-afde-a15d309846c9","Type":"ContainerStarted","Data":"77f319a2e785178f5ab6c0e9d0ef6e6d8b428158ddf822288e0269b9fadbc1bc"} Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.154119 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns/dns-default-n64rh" event={"ID":"b3398b97-1658-4344-afde-a15d309846c9","Type":"ContainerStarted","Data":"79dd21db6e96e026ea27dc892c76031b1a6b072e7d4fc364003047620559b743"} Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.162048 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-wpz4s" event={"ID":"cf1e5da6-8866-4e4d-bafe-84bc0f76c41f","Type":"ContainerStarted","Data":"92a14d229abe1ec8a669dd42c5d38c93c2d9368cb22003ce1fb82a092b3b08fa"} Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.168521 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-mbllj" event={"ID":"e47ad1f1-7281-4a86-bac9-bbaa37dfeab1","Type":"ContainerStarted","Data":"2d121a4623225c8f67f2a2f2b090f585a96ab8ce2b4891e105a97a43f5d0a04e"} Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.172188 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-b7nfk" event={"ID":"288efdc1-c138-42d5-9416-5c9d0faaa831","Type":"ContainerStarted","Data":"5b9b1383588cdfb3f4763e916c144bae830b64a006a8d6399d79a8f68a1fc6c9"} Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.172291 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-b7nfk" event={"ID":"288efdc1-c138-42d5-9416-5c9d0faaa831","Type":"ContainerStarted","Data":"a3e39967ac24d8a4d360f80295c637b654b776aaccdecac9fb7cb22945abc3c2"} Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.186188 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:45 crc kubenswrapper[5124]: E0126 00:10:45.186526 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:45.686502629 +0000 UTC m=+123.595421978 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.192245 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-lxzd9" Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.199881 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vfn25"] Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.212368 5124 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-9jvql container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:10:45 crc kubenswrapper[5124]: [-]has-synced failed: reason withheld Jan 26 00:10:45 crc kubenswrapper[5124]: [+]process-running ok Jan 26 00:10:45 crc kubenswrapper[5124]: healthz check failed Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.212435 5124 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-9jvql" podUID="c2cd8439-aeb3-4321-9842-11b3cbb37b0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:10:45 crc kubenswrapper[5124]: W0126 00:10:45.217806 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod839e8646_b712_4725_8456_806e52a3144c.slice/crio-bb19d76291c113b9d66ffcb9e92de02b4f254fc07312fe7f2e6b2e948d1a95fa WatchSource:0}: Error finding container bb19d76291c113b9d66ffcb9e92de02b4f254fc07312fe7f2e6b2e948d1a95fa: Status 404 returned error can't find the container with id bb19d76291c113b9d66ffcb9e92de02b4f254fc07312fe7f2e6b2e948d1a95fa Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.229949 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5tzb8"] Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.272655 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sdxrl"] Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.286624 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-vp4mw" Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.298191 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:45 crc kubenswrapper[5124]: E0126 00:10:45.298542 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
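Note: the probe failures above (connection refused against 10.217.0.27:8080 for the downloads pod, and the router's health endpoint answering 500 with "[-]backend-http failed") are HTTP GET checks performed by the kubelet's prober. The sketch below shows roughly what such a check amounts to; the URL is copied from the probe output in the log, while the 1-second timeout is an assumption rather than the pod's configured probe settings.

```go
// probe.go: a minimal sketch of an HTTP readiness/startup-style check,
// roughly what the failed probes above amount to. URL from the log output;
// the timeout value is an assumption.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get("http://10.217.0.27:8080/")
	if err != nil {
		// e.g. "connect: connection refused" while the container is still starting
		fmt.Println("probe failure:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// The kubelet treats a status in [200,400) as success; a 500 carrying a
	// healthz body such as "[-]backend-http failed" reads as a failed probe.
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		fmt.Println("probe success:", resp.StatusCode)
	} else {
		fmt.Printf("probe failure: status %d, body: %s\n", resp.StatusCode, body)
	}
}
```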
No retries permitted until 2026-01-26 00:10:45.798529519 +0000 UTC m=+123.707448868 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.324144 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-lxzd9" podStartSLOduration=104.324129538 podStartE2EDuration="1m44.324129538s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:45.296804193 +0000 UTC m=+123.205723552" watchObservedRunningTime="2026-01-26 00:10:45.324129538 +0000 UTC m=+123.233048887" Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.357850 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-f6l2j"] Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.382805 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-fqxww"] Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.402115 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:45 crc kubenswrapper[5124]: E0126 00:10:45.402351 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:45.902334501 +0000 UTC m=+123.811253850 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.503282 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:45 crc kubenswrapper[5124]: E0126 00:10:45.503925 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 00:10:46.003912424 +0000 UTC m=+123.912831773 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.604413 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:45 crc kubenswrapper[5124]: E0126 00:10:45.604807 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:46.104792259 +0000 UTC m=+124.013711608 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.715323 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:45 crc kubenswrapper[5124]: E0126 00:10:45.715733 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:46.215715841 +0000 UTC m=+124.124635190 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.817399 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:45 crc kubenswrapper[5124]: E0126 00:10:45.817663 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:46.317647943 +0000 UTC m=+124.226567292 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.880783 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-vq8mw" podStartSLOduration=104.880766827 podStartE2EDuration="1m44.880766827s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:45.87939493 +0000 UTC m=+123.788314279" watchObservedRunningTime="2026-01-26 00:10:45.880766827 +0000 UTC m=+123.789686166" Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.919966 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:45 crc kubenswrapper[5124]: E0126 00:10:45.920271 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:46.420259804 +0000 UTC m=+124.329179153 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:45 crc kubenswrapper[5124]: I0126 00:10:45.924744 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-zbgtx" podStartSLOduration=104.924726202 podStartE2EDuration="1m44.924726202s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:45.919790511 +0000 UTC m=+123.828709860" watchObservedRunningTime="2026-01-26 00:10:45.924726202 +0000 UTC m=+123.833645551" Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.021631 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:46 crc kubenswrapper[5124]: E0126 00:10:46.022004 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:46.521976101 +0000 UTC m=+124.430895450 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.024800 5124 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-v5jrb container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.14:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.024864 5124 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" podUID="b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.14:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.123368 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:46 crc kubenswrapper[5124]: E0126 00:10:46.124551 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:46.62453863 +0000 UTC m=+124.533457979 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.203838 5124 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-9jvql container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:10:46 crc kubenswrapper[5124]: [-]has-synced failed: reason withheld Jan 26 00:10:46 crc kubenswrapper[5124]: [+]process-running ok Jan 26 00:10:46 crc kubenswrapper[5124]: healthz check failed Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.203886 5124 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-9jvql" podUID="c2cd8439-aeb3-4321-9842-11b3cbb37b0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.224361 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-fqxww" event={"ID":"27a594f4-28ad-49d0-8ab7-f0c0ff14d65c","Type":"ContainerStarted","Data":"ab087d9fc7dc4c81196d0adf23ac117f096581f88a8306be0a9bfaca74a0b658"} Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.225409 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:46 crc kubenswrapper[5124]: E0126 00:10:46.226772 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:46.72673762 +0000 UTC m=+124.635656969 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.232141 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:46 crc kubenswrapper[5124]: E0126 00:10:46.232551 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:46.732536823 +0000 UTC m=+124.641456162 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.244130 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-vp4mw" podStartSLOduration=7.24411226 podStartE2EDuration="7.24411226s" podCreationTimestamp="2026-01-26 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:46.230715225 +0000 UTC m=+124.139634584" watchObservedRunningTime="2026-01-26 00:10:46.24411226 +0000 UTC m=+124.153031609" Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.246895 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-9qgdz"] Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.261826 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zbjgw" event={"ID":"839e8646-b712-4725-8456-806e52a3144c","Type":"ContainerStarted","Data":"bb19d76291c113b9d66ffcb9e92de02b4f254fc07312fe7f2e6b2e948d1a95fa"} Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.266596 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-nc9fk"] Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.274936 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kpn7g"] Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.276693 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zfncw"] Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.278383 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-oauth-apiserver/apiserver-8596bd845d-fpklc"] Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.282518 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-t5442" event={"ID":"1185cd69-7c6a-46f0-acf1-64d587996124","Type":"ContainerStarted","Data":"f70eef9f1fe83958e96c11f3f2a6c28742de9c1ebfa5336c6bf1a06d88340c20"} Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.286582 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-5cjkn" podStartSLOduration=105.286547995 podStartE2EDuration="1m45.286547995s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:46.282734234 +0000 UTC m=+124.191653583" watchObservedRunningTime="2026-01-26 00:10:46.286547995 +0000 UTC m=+124.195467344" Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.290171 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-ns6rw"] Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.309727 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-wpz4s" event={"ID":"cf1e5da6-8866-4e4d-bafe-84bc0f76c41f","Type":"ContainerStarted","Data":"4828c2593bbebc64c23611bb6652f34ab2482962270e1fb01c7eafd99ba959e3"} Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.314258 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vfn25" event={"ID":"ec000458-4225-4aa1-b22e-244d7d137c9e","Type":"ContainerStarted","Data":"063d056570be8e978e10321a2193ef20724207c9f2defdc6d917769c766981a2"} Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.322460 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-5hwt4" event={"ID":"973d580d-7e62-419e-be96-115733ca98bf","Type":"ContainerStarted","Data":"e09a49c0f2cca84d58fcef42008b09f8dd94517e0d7ae07b317ca592bd050d97"} Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.322819 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29489760-ldpxs"] Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.331660 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-f6l2j" event={"ID":"acdc983c-4d4e-4a1e-82a3-a137fe39882a","Type":"ContainerStarted","Data":"8eccfd027be1754d9e541a74d57b4ca5fcf299da03361198c8914b88298b9c3f"} Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.332965 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:46 crc kubenswrapper[5124]: E0126 00:10:46.333241 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-26 00:10:46.833198312 +0000 UTC m=+124.742117661 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.337378 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sdxrl" event={"ID":"e811bf67-7a6d-4279-bbff-b2cf02f66558","Type":"ContainerStarted","Data":"efd21b936960fbc131360e95b5a0d98a711158d56be0f9ac998b9a4c598f6854"} Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.346671 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-6629f" event={"ID":"b9496837-38dd-4e08-bf40-9a191112e42a","Type":"ContainerStarted","Data":"d9ad20a1e06297b985803272e62d77c5591a1a4a2982e685415b3f894d3e38d3"} Jan 26 00:10:46 crc kubenswrapper[5124]: W0126 00:10:46.376410 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode451454a_5a94_4535_823c_523ea6f6f7de.slice/crio-650b45b2f3fc041c59642277de35247cd73e9884f484180564c05104ce150569 WatchSource:0}: Error finding container 650b45b2f3fc041c59642277de35247cd73e9884f484180564c05104ce150569: Status 404 returned error can't find the container with id 650b45b2f3fc041c59642277de35247cd73e9884f484180564c05104ce150569 Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.377517 5124 patch_prober.go:28] interesting pod/downloads-747b44746d-vcw8h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.378382 5124 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-vcw8h" podUID="a219f23e-815a-42e8-82a6-941d1624c7d7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 26 00:10:46 crc kubenswrapper[5124]: W0126 00:10:46.391607 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod498973e3_482d_4a19_9224_c3e67efc2a20.slice/crio-d009aa7b93873b554d0213a2bcda026ca1ef460b288e7a03b3f6888a0bcabc09 WatchSource:0}: Error finding container d009aa7b93873b554d0213a2bcda026ca1ef460b288e7a03b3f6888a0bcabc09: Status 404 returned error can't find the container with id d009aa7b93873b554d0213a2bcda026ca1ef460b288e7a03b3f6888a0bcabc09 Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.395362 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.395437 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5tzb8" 
event={"ID":"8a006121-cc9c-46f5-98db-14148f556b11","Type":"ContainerStarted","Data":"c3c6c6e2b32d0758401677058d66ec62597d2aab53c630d614c08a6afe3a246d"} Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.399990 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jkc7k"] Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.420767 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-lvq9k"] Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.423602 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-s87zt"] Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.434684 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-9jvql" podStartSLOduration=105.434662082 podStartE2EDuration="1m45.434662082s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:46.428199781 +0000 UTC m=+124.337119130" watchObservedRunningTime="2026-01-26 00:10:46.434662082 +0000 UTC m=+124.343581431" Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.435113 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:46 crc kubenswrapper[5124]: E0126 00:10:46.438276 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:46.938262209 +0000 UTC m=+124.847181558 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.520533 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" podStartSLOduration=105.520517589 podStartE2EDuration="1m45.520517589s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:46.519221474 +0000 UTC m=+124.428140823" watchObservedRunningTime="2026-01-26 00:10:46.520517589 +0000 UTC m=+124.429436938" Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.538856 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:46 crc kubenswrapper[5124]: E0126 00:10:46.539011 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:47.038984799 +0000 UTC m=+124.947904148 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.539119 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:46 crc kubenswrapper[5124]: E0126 00:10:46.539535 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:47.039514993 +0000 UTC m=+124.948434342 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.554912 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-b7nfk" podStartSLOduration=105.55489362 podStartE2EDuration="1m45.55489362s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:46.547741581 +0000 UTC m=+124.456660940" watchObservedRunningTime="2026-01-26 00:10:46.55489362 +0000 UTC m=+124.463812969" Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.625717 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-6629f" podStartSLOduration=105.625696027 podStartE2EDuration="1m45.625696027s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:46.594048228 +0000 UTC m=+124.502967577" watchObservedRunningTime="2026-01-26 00:10:46.625696027 +0000 UTC m=+124.534615376" Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.626939 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-sctbw" podStartSLOduration=105.626931281 podStartE2EDuration="1m45.626931281s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:46.623824698 +0000 UTC m=+124.532744047" watchObservedRunningTime="2026-01-26 00:10:46.626931281 +0000 UTC m=+124.535850650" Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.643677 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:46 crc kubenswrapper[5124]: E0126 00:10:46.643814 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:47.143782717 +0000 UTC m=+125.052702066 (durationBeforeRetry 500ms). 
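Note: the pod_startup_latency_tracker entries above report both podStartSLOduration and podStartE2EDuration; in these entries the two match, apparently because the image-pull timestamps are zero-valued (no pull was observed for these pods). The figures can be re-derived from the timestamps printed in the entry itself, as in the sketch below; the parse layout is an assumption chosen to match how the timestamps are printed here.

```go
// startup_duration.go: re-derive the ~1m45s figure in one of the
// pod_startup_latency_tracker entries above from its own timestamps
// (podCreationTimestamp and watchObservedRunningTime for network-metrics-daemon-sctbw).
package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed layout matching how the timestamps appear in the log.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created, err := time.Parse(layout, "2026-01-26 00:09:01 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2026-01-26 00:10:46.626931281 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints 1m45.626931281s, matching podStartE2EDuration="1m45.626931281s"
	// (and podStartSLOduration=105.626931281) in the entry above.
	fmt.Println("end-to-end startup:", running.Sub(created))
}
```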
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.644723 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:46 crc kubenswrapper[5124]: E0126 00:10:46.649965 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:47.14993561 +0000 UTC m=+125.058854959 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.671955 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-87k2l" podStartSLOduration=7.671936444 podStartE2EDuration="7.671936444s" podCreationTimestamp="2026-01-26 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:46.668193545 +0000 UTC m=+124.577112904" watchObservedRunningTime="2026-01-26 00:10:46.671936444 +0000 UTC m=+124.580855793" Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.713057 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-qdvls" podStartSLOduration=105.713043834 podStartE2EDuration="1m45.713043834s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:46.712026317 +0000 UTC m=+124.620945656" watchObservedRunningTime="2026-01-26 00:10:46.713043834 +0000 UTC m=+124.621963183" Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.753200 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:46 crc kubenswrapper[5124]: E0126 00:10:46.757757 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:47.253893047 +0000 UTC m=+125.162812396 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.798960 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29489760-dm2tt" podStartSLOduration=106.798938171 podStartE2EDuration="1m46.798938171s" podCreationTimestamp="2026-01-26 00:09:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:46.752458059 +0000 UTC m=+124.661377418" watchObservedRunningTime="2026-01-26 00:10:46.798938171 +0000 UTC m=+124.707857520" Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.854750 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:46 crc kubenswrapper[5124]: E0126 00:10:46.855183 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:47.355169362 +0000 UTC m=+125.264088711 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.864473 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-vcw8h" podStartSLOduration=105.864455089 podStartE2EDuration="1m45.864455089s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:46.833307703 +0000 UTC m=+124.742227052" watchObservedRunningTime="2026-01-26 00:10:46.864455089 +0000 UTC m=+124.773374438" Jan 26 00:10:46 crc kubenswrapper[5124]: I0126 00:10:46.960221 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:46 crc kubenswrapper[5124]: E0126 00:10:46.960470 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:47.460453694 +0000 UTC m=+125.369373043 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.064350 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:47 crc kubenswrapper[5124]: E0126 00:10:47.064694 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:47.564678987 +0000 UTC m=+125.473598336 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.167229 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:47 crc kubenswrapper[5124]: E0126 00:10:47.167743 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:47.667726089 +0000 UTC m=+125.576645438 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.208807 5124 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-9jvql container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:10:47 crc kubenswrapper[5124]: [-]has-synced failed: reason withheld Jan 26 00:10:47 crc kubenswrapper[5124]: [+]process-running ok Jan 26 00:10:47 crc kubenswrapper[5124]: healthz check failed Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.208866 5124 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-9jvql" podUID="c2cd8439-aeb3-4321-9842-11b3cbb37b0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.277347 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:47 crc kubenswrapper[5124]: E0126 00:10:47.277726 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:47.777711055 +0000 UTC m=+125.686630404 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.378078 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:47 crc kubenswrapper[5124]: E0126 00:10:47.378196 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:47.878170469 +0000 UTC m=+125.787089818 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.378710 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:47 crc kubenswrapper[5124]: E0126 00:10:47.379094 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:47.879083923 +0000 UTC m=+125.788003272 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.420522 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-mbllj" event={"ID":"e47ad1f1-7281-4a86-bac9-bbaa37dfeab1","Type":"ContainerStarted","Data":"f43e3eb6faf4903892301663ba6482f613c204dacde90b7f589c870fc086c6ec"} Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.458802 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2xm5v" event={"ID":"b3a1a33e-2dab-43f6-8c34-6ac84e05eb03","Type":"ContainerStarted","Data":"5d08e1dc779f116d7579979b323eb674087d13fc8818b5ed9482e8cc5850b7db"} Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.469101 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-nsc2v" podStartSLOduration=106.46908157 podStartE2EDuration="1m46.46908157s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:46.866066361 +0000 UTC m=+124.774985710" watchObservedRunningTime="2026-01-26 00:10:47.46908157 +0000 UTC m=+125.378000919" Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.488001 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:47 crc kubenswrapper[5124]: E0126 00:10:47.488974 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:47.988951527 +0000 UTC m=+125.897870876 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.509917 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-rb8jj" event={"ID":"1b3bd69c-7b97-42bb-9f12-7d690416e91f","Type":"ContainerStarted","Data":"a6c94d36a9d6f2d406018669e4559bef081c02aa2ecea5d3d8b2395c8b844441"} Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.509963 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-rb8jj" event={"ID":"1b3bd69c-7b97-42bb-9f12-7d690416e91f","Type":"ContainerStarted","Data":"9e8d51f48caa00596e4184018e68f4e8aabf7a221d23014258e0fd3f4c0fd23d"} Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.526399 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2xm5v" podStartSLOduration=106.526384149 podStartE2EDuration="1m46.526384149s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:47.525426734 +0000 UTC m=+125.434346083" watchObservedRunningTime="2026-01-26 00:10:47.526384149 +0000 UTC m=+125.435303498" Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.527681 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-mbllj" podStartSLOduration=106.527673093 podStartE2EDuration="1m46.527673093s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:47.478850799 +0000 UTC m=+125.387770148" watchObservedRunningTime="2026-01-26 00:10:47.527673093 +0000 UTC m=+125.436592442" Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.546148 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8cj7n" event={"ID":"4ad39e4e-4d41-443b-bfc7-a4ec7113664c","Type":"ContainerStarted","Data":"b93c2fe5eb46b72aea038dbb4367b1492c801dfc62991e8d043062da4eb763ba"} Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.546402 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8cj7n" event={"ID":"4ad39e4e-4d41-443b-bfc7-a4ec7113664c","Type":"ContainerStarted","Data":"dbe4d5be8f3753792571d196c172f8057d1e6c297de89cbe886ced16ced8b03e"} Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.581224 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-rb8jj" podStartSLOduration=106.581207453 podStartE2EDuration="1m46.581207453s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-26 00:10:47.571984788 +0000 UTC m=+125.480904137" watchObservedRunningTime="2026-01-26 00:10:47.581207453 +0000 UTC m=+125.490126802" Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.589312 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:47 crc kubenswrapper[5124]: E0126 00:10:47.589834 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:48.089814441 +0000 UTC m=+125.998733790 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.600885 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-f6l2j" event={"ID":"acdc983c-4d4e-4a1e-82a3-a137fe39882a","Type":"ContainerStarted","Data":"4292b9719fce00119e65ff2e3454f405c72ed92cd22b001a947af79ad57847d8"} Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.602134 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-f6l2j" Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.626503 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8cj7n" podStartSLOduration=106.626488283 podStartE2EDuration="1m46.626488283s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:47.624571312 +0000 UTC m=+125.533490671" watchObservedRunningTime="2026-01-26 00:10:47.626488283 +0000 UTC m=+125.535407632" Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.628557 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-6np67" event={"ID":"dfdd3fba-e428-46ea-a831-e53d949c342a","Type":"ContainerStarted","Data":"ae6b6c2366f589a4e2430b3ba7a3166c16c5268d49448a25a23282ed93a74582"} Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.641541 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-f6l2j" Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.655833 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jkc7k" event={"ID":"e93a2f69-37f1-47bc-b659-8684acf34de3","Type":"ContainerStarted","Data":"e2ba197bc5010c96148d76c5c5bdad57ad3fe0abfe562cb4037e5604e08ae0de"} Jan 26 00:10:47 crc 
kubenswrapper[5124]: I0126 00:10:47.668640 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-f6l2j" podStartSLOduration=106.668619401 podStartE2EDuration="1m46.668619401s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:47.666160585 +0000 UTC m=+125.575079954" watchObservedRunningTime="2026-01-26 00:10:47.668619401 +0000 UTC m=+125.577538750" Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.669207 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5tzb8" event={"ID":"8a006121-cc9c-46f5-98db-14148f556b11","Type":"ContainerStarted","Data":"270c89b3ed79fd757039ed4b802de1bbf2bfeb15d289098bd7347d7d14581d62"} Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.670175 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5tzb8" Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.675999 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-ns6rw" event={"ID":"9f09670d-b0a1-4fa2-9d30-7b82c260e38d","Type":"ContainerStarted","Data":"cd5e6db58fa7402d43e8a5c0645ad4b6e0adcca1b1ec7188aa4135e4c520b448"} Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.676046 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-ns6rw" event={"ID":"9f09670d-b0a1-4fa2-9d30-7b82c260e38d","Type":"ContainerStarted","Data":"bb6666c8176f1c1dbf2dd9fa87f018ada683654dbdb67dd1e65fec0ad8262cde"} Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.676769 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-ns6rw" Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.682907 5124 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-5tzb8 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.682955 5124 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5tzb8" podUID="8a006121-cc9c-46f5-98db-14148f556b11" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.690044 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:47 crc kubenswrapper[5124]: E0126 00:10:47.691582 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-26 00:10:48.191565449 +0000 UTC m=+126.100484798 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.719031 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jkc7k" podStartSLOduration=106.719003416 podStartE2EDuration="1m46.719003416s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:47.692278047 +0000 UTC m=+125.601197406" watchObservedRunningTime="2026-01-26 00:10:47.719003416 +0000 UTC m=+125.627922765" Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.723769 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-6np67" podStartSLOduration=106.723751402 podStartE2EDuration="1m46.723751402s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:47.722786186 +0000 UTC m=+125.631705545" watchObservedRunningTime="2026-01-26 00:10:47.723751402 +0000 UTC m=+125.632670751" Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.727702 5124 patch_prober.go:28] interesting pod/console-operator-67c89758df-ns6rw container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/readyz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.727755 5124 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-ns6rw" podUID="9f09670d-b0a1-4fa2-9d30-7b82c260e38d" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/readyz\": dial tcp 10.217.0.28:8443: connect: connection refused" Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.744323 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-fqxww" event={"ID":"27a594f4-28ad-49d0-8ab7-f0c0ff14d65c","Type":"ContainerStarted","Data":"e638f26398b03bf0c1623cf4b5f20a50d00c1536ff2a2e63cd4518ade4a9aa2f"} Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.767817 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zfncw" event={"ID":"b14632cd-c5f4-41b7-be2f-71d6f7f2c264","Type":"ContainerStarted","Data":"9aa7e8652c05689df99d0f4133b06c4a6a85490e64de238630ef6851ec7eb3de"} Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.768039 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zfncw" 
event={"ID":"b14632cd-c5f4-41b7-be2f-71d6f7f2c264","Type":"ContainerStarted","Data":"f086afb2c466cc6df0d5657bb7fe643a508d088a67cc092f1855287a58afadc3"} Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.779447 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-sv2rt" event={"ID":"6b5e4a3d-13f4-42c6-9adb-30a826411994","Type":"ContainerStarted","Data":"db41153e6611726345cb212177c36e05f21f3387d3aef17a10e00e7311123152"} Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.792140 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-nc9fk" event={"ID":"e451454a-5a94-4535-823c-523ea6f6f7de","Type":"ContainerStarted","Data":"afa610cc5dab653ff4bd99bfab7ab1f7b75749be5a8df38a3f594b4cfbfb5960"} Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.792185 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-nc9fk" event={"ID":"e451454a-5a94-4535-823c-523ea6f6f7de","Type":"ContainerStarted","Data":"650b45b2f3fc041c59642277de35247cd73e9884f484180564c05104ce150569"} Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.793177 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:47 crc kubenswrapper[5124]: E0126 00:10:47.796359 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:48.296348437 +0000 UTC m=+126.205267776 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.816914 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vfn25" event={"ID":"ec000458-4225-4aa1-b22e-244d7d137c9e","Type":"ContainerStarted","Data":"994ecb9a56d29303c10d4175e611484cc2a360a251fca944bb455ae8d5490f44"} Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.820124 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-lvq9k" event={"ID":"d76339a3-5850-4e27-be40-03180dc8e526","Type":"ContainerStarted","Data":"4a5083c3daeeeaaa07c0344155db37c38d85b5a2fa2b87e59841151bb1d7c8a6"} Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.824107 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-fqxww" podStartSLOduration=106.824087262 podStartE2EDuration="1m46.824087262s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:47.788863119 +0000 UTC m=+125.697782468" watchObservedRunningTime="2026-01-26 00:10:47.824087262 +0000 UTC m=+125.733006611" Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.830128 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5tzb8" podStartSLOduration=106.830111452 podStartE2EDuration="1m46.830111452s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:47.8289019 +0000 UTC m=+125.737821259" watchObservedRunningTime="2026-01-26 00:10:47.830111452 +0000 UTC m=+125.739030801" Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.839707 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-5hwt4" event={"ID":"973d580d-7e62-419e-be96-115733ca98bf","Type":"ContainerStarted","Data":"8e9091f8fed28f88cf73c06f29899ff7362d84ec97673a79cb6fcebd3feb183a"} Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.845142 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-5hwt4" Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.858018 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-ldpxs" event={"ID":"2c16907d-1bcd-420c-879d-65a0552e69d3","Type":"ContainerStarted","Data":"a6069bcc11e12334b2799d9c0d35bcf66dee472addcb45c5eec3cb3e0e857220"} Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.858100 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-ldpxs" 
event={"ID":"2c16907d-1bcd-420c-879d-65a0552e69d3","Type":"ContainerStarted","Data":"1766d0b45ff852106bb11b4c5aa54ee8ece02c662952487275bb0abb128e6f5c"} Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.864655 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-vp4mw"] Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.870997 5124 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-5hwt4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.871048 5124 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-5hwt4" podUID="973d580d-7e62-419e-be96-115733ca98bf" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.877263 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-ns6rw" podStartSLOduration=106.877241321 podStartE2EDuration="1m46.877241321s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:47.852831565 +0000 UTC m=+125.761750914" watchObservedRunningTime="2026-01-26 00:10:47.877241321 +0000 UTC m=+125.786160670" Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.879424 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zfncw" podStartSLOduration=106.879413759 podStartE2EDuration="1m46.879413759s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:47.871338685 +0000 UTC m=+125.780258034" watchObservedRunningTime="2026-01-26 00:10:47.879413759 +0000 UTC m=+125.788333108" Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.906337 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:47 crc kubenswrapper[5124]: E0126 00:10:47.907284 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:48.407263098 +0000 UTC m=+126.316182447 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.917666 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sdxrl" event={"ID":"e811bf67-7a6d-4279-bbff-b2cf02f66558","Type":"ContainerStarted","Data":"ac7802b0c487473de0814fbc47608d9bbc9a26faa2dcd4462533e467bdd0abeb"} Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.918531 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sdxrl" Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.933029 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-5hwt4" podStartSLOduration=106.93300914 podStartE2EDuration="1m46.93300914s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:47.918862265 +0000 UTC m=+125.827781614" watchObservedRunningTime="2026-01-26 00:10:47.93300914 +0000 UTC m=+125.841928489" Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.954949 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-sv2rt" podStartSLOduration=106.954926262 podStartE2EDuration="1m46.954926262s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:47.947372781 +0000 UTC m=+125.856292290" watchObservedRunningTime="2026-01-26 00:10:47.954926262 +0000 UTC m=+125.863845611" Jan 26 00:10:47 crc kubenswrapper[5124]: I0126 00:10:47.970028 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sdxrl" Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.005287 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kpn7g" event={"ID":"460f5edc-0e33-44ee-b8ad-41e51e22924a","Type":"ContainerStarted","Data":"5dcc9d5742741420c73bdb5b739732d21d260c326e23f4c0549d68b8e63c46c9"} Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.005334 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kpn7g" event={"ID":"460f5edc-0e33-44ee-b8ad-41e51e22924a","Type":"ContainerStarted","Data":"108248a4c04c3d2c084e51460fcb93af02bf429b84962be2d9e50cf497e0af79"} Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.005740 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kpn7g" Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.010345 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:48 crc kubenswrapper[5124]: E0126 00:10:48.011486 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:48.511473671 +0000 UTC m=+126.420393020 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.011849 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vfn25" podStartSLOduration=107.01183531 podStartE2EDuration="1m47.01183531s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:47.978373523 +0000 UTC m=+125.887292872" watchObservedRunningTime="2026-01-26 00:10:48.01183531 +0000 UTC m=+125.920754659" Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.012283 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-nc9fk" podStartSLOduration=9.012280492 podStartE2EDuration="9.012280492s" podCreationTimestamp="2026-01-26 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:48.009961101 +0000 UTC m=+125.918880440" watchObservedRunningTime="2026-01-26 00:10:48.012280492 +0000 UTC m=+125.921199841" Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.041333 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kpn7g" podStartSLOduration=107.041317342 podStartE2EDuration="1m47.041317342s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:48.040442068 +0000 UTC m=+125.949361427" watchObservedRunningTime="2026-01-26 00:10:48.041317342 +0000 UTC m=+125.950236691" Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.073759 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-9qgdz" event={"ID":"80cd99f0-6ac5-4187-9bdd-79dde0e74a57","Type":"ContainerStarted","Data":"8982ff9013ea67706444c225245cd8c7eae6ba0971f383d880b54041fc6811fe"} Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.073811 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-9qgdz" 
event={"ID":"80cd99f0-6ac5-4187-9bdd-79dde0e74a57","Type":"ContainerStarted","Data":"60a3658497d6de17455408e4a227e16c1201f0612ce327b0354fbe5fb6e96925"} Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.079333 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sdxrl" podStartSLOduration=107.079316559 podStartE2EDuration="1m47.079316559s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:48.079028672 +0000 UTC m=+125.987948031" watchObservedRunningTime="2026-01-26 00:10:48.079316559 +0000 UTC m=+125.988235908" Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.095425 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-csld6" event={"ID":"4fe9fae5-6a94-45aa-9fe5-086c9dddb3c1","Type":"ContainerStarted","Data":"ee9193eb62a98a09538edc6f28e8d93dc10c2d7b6a4b7824f64a0ac54e0845a9"} Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.112409 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:48 crc kubenswrapper[5124]: E0126 00:10:48.112560 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:48.61252997 +0000 UTC m=+126.521449319 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.113017 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:48 crc kubenswrapper[5124]: E0126 00:10:48.113322 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:48.61331561 +0000 UTC m=+126.522234959 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.130088 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zbjgw" event={"ID":"839e8646-b712-4725-8456-806e52a3144c","Type":"ContainerStarted","Data":"08a9aaa2e4fb596d80dae9f2793f7b280bbb913f3b2ca473d00fd937ca2885f0"} Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.130989 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zbjgw" Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.152562 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-ldpxs" podStartSLOduration=107.15253908 podStartE2EDuration="1m47.15253908s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:48.109454519 +0000 UTC m=+126.018373868" watchObservedRunningTime="2026-01-26 00:10:48.15253908 +0000 UTC m=+126.061458429" Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.155291 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-fpklc" event={"ID":"498973e3-482d-4a19-9224-c3e67efc2a20","Type":"ContainerStarted","Data":"d009aa7b93873b554d0213a2bcda026ca1ef460b288e7a03b3f6888a0bcabc09"} Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.199315 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-csld6" podStartSLOduration=107.199297901 podStartE2EDuration="1m47.199297901s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:48.154063961 +0000 UTC m=+126.062983310" watchObservedRunningTime="2026-01-26 00:10:48.199297901 +0000 UTC m=+126.108217250" Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.199910 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-t5442" event={"ID":"1185cd69-7c6a-46f0-acf1-64d587996124","Type":"ContainerStarted","Data":"962943e401a88d63685b2bdbefc856e9e19f01f60e044eefa84c4730b55b8b22"} Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.218543 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:48 crc kubenswrapper[5124]: E0126 00:10:48.219503 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:48.719486586 +0000 UTC m=+126.628405935 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.221187 5124 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-9jvql container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:10:48 crc kubenswrapper[5124]: [-]has-synced failed: reason withheld Jan 26 00:10:48 crc kubenswrapper[5124]: [+]process-running ok Jan 26 00:10:48 crc kubenswrapper[5124]: healthz check failed Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.221284 5124 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-9jvql" podUID="c2cd8439-aeb3-4321-9842-11b3cbb37b0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.233739 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zbjgw" podStartSLOduration=107.233725543 podStartE2EDuration="1m47.233725543s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:48.200931844 +0000 UTC m=+126.109851193" watchObservedRunningTime="2026-01-26 00:10:48.233725543 +0000 UTC m=+126.142644892" Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.234793 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-xzm9l" event={"ID":"5205d539-f164-46b4-858c-9ca958a1102a","Type":"ContainerStarted","Data":"63eaed45e1ebd94545a73f369e3cff46287670f9695ac0564aafadd603ec15c9"} Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.234819 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-xzm9l" event={"ID":"5205d539-f164-46b4-858c-9ca958a1102a","Type":"ContainerStarted","Data":"a6dd9ac2f0bd63d219c9d12a3335ae79799a6106386b9ebf71cc8b71e18dcb37"} Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.256227 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-n64rh" event={"ID":"b3398b97-1658-4344-afde-a15d309846c9","Type":"ContainerStarted","Data":"7089fd216950b375c75fdfd47502e605fe65212304ba29bac51df70d68d74472"} Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.256873 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-n64rh" Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.261211 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-t5442" podStartSLOduration=107.261193742 podStartE2EDuration="1m47.261193742s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:48.260633087 +0000 UTC m=+126.169552456" watchObservedRunningTime="2026-01-26 00:10:48.261193742 +0000 UTC m=+126.170113081" Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.282522 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-wpz4s" event={"ID":"cf1e5da6-8866-4e4d-bafe-84bc0f76c41f","Type":"ContainerStarted","Data":"1d4764cd1eecd6626ce8a52a844521ffb792adb2fbfe6b03c5a58680df559224"} Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.287686 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-xzm9l" podStartSLOduration=107.287668314 podStartE2EDuration="1m47.287668314s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:48.287391467 +0000 UTC m=+126.196310856" watchObservedRunningTime="2026-01-26 00:10:48.287668314 +0000 UTC m=+126.196587663" Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.300844 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" event={"ID":"c696bafb-e286-4dc1-8edd-860c8c0564da","Type":"ContainerStarted","Data":"53666d1f32dc6670217ffc389f91d913cf2e232727a21330b6e03235895dd756"} Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.301252 5124 patch_prober.go:28] interesting pod/downloads-747b44746d-vcw8h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.301300 5124 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-vcw8h" podUID="a219f23e-815a-42e8-82a6-941d1624c7d7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.302561 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-vp4mw" podUID="a69d5905-85d8-49b8-ab54-15fc8f104c31" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://589a3be92836dc1a24ad9d394ab7344448ca60f4dd548fd16fd8668afb470622" gracePeriod=30 Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.322821 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:48 crc kubenswrapper[5124]: E0126 00:10:48.324829 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:48.824816539 +0000 UTC m=+126.733735888 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.342829 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-n64rh" podStartSLOduration=9.342814116 podStartE2EDuration="9.342814116s" podCreationTimestamp="2026-01-26 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:48.33957813 +0000 UTC m=+126.248497489" watchObservedRunningTime="2026-01-26 00:10:48.342814116 +0000 UTC m=+126.251733465" Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.406175 5124 scope.go:117] "RemoveContainer" containerID="6215e20f15c7a51f410c9c54859dda249912a0f1e02d737e53f957cd8d73cd32" Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.420213 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-wpz4s" podStartSLOduration=107.420191958 podStartE2EDuration="1m47.420191958s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:48.412523544 +0000 UTC m=+126.321442893" watchObservedRunningTime="2026-01-26 00:10:48.420191958 +0000 UTC m=+126.329111307" Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.453752 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:48 crc kubenswrapper[5124]: E0126 00:10:48.461175 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:48.961140784 +0000 UTC m=+126.870060133 (durationBeforeRetry 500ms). 
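Annotation: the "Observed pod startup duration" entries report how long each pod took from creation to being observed running. For dns-default-n64rh in the entry above, no image pulls were involved (both pull timestamps are the zero time), and the logged podStartSLOduration is exactly the gap between podCreationTimestamp and watchObservedRunningTime. The snippet below only re-does that arithmetic from the values visible in the log; it is not kubelet's latency-tracker implementation, and whether kubelet subtracts anything further in other cases is not shown here.

// Reproduces the podStartSLOduration logged for dns-default-n64rh from the two
// timestamps in the same entry.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2026-01-26 00:10:39 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2026-01-26 00:10:48.342814116 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(observed.Sub(created).Seconds()) // 9.342814116, matching podStartSLOduration
}

The journal continues: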
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.555328 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:48 crc kubenswrapper[5124]: E0126 00:10:48.555720 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:49.05570783 +0000 UTC m=+126.964627179 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.663024 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:48 crc kubenswrapper[5124]: E0126 00:10:48.663250 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:49.163234512 +0000 UTC m=+127.072153861 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.764164 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:48 crc kubenswrapper[5124]: E0126 00:10:48.764490 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:49.264478536 +0000 UTC m=+127.173397885 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.869343 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:48 crc kubenswrapper[5124]: E0126 00:10:48.869561 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:49.369543151 +0000 UTC m=+127.278462500 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:48 crc kubenswrapper[5124]: I0126 00:10:48.971267 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:48 crc kubenswrapper[5124]: E0126 00:10:48.972274 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:49.472261975 +0000 UTC m=+127.381181324 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.073148 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:49 crc kubenswrapper[5124]: E0126 00:10:49.073514 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:49.573490629 +0000 UTC m=+127.482409978 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.073787 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:49 crc kubenswrapper[5124]: E0126 00:10:49.074054 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:49.574047963 +0000 UTC m=+127.482967302 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.131816 5124 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-zbjgw container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.131920 5124 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zbjgw" podUID="839e8646-b712-4725-8456-806e52a3144c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.16:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.177075 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:49 crc kubenswrapper[5124]: E0126 00:10:49.177368 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:49.677349942 +0000 UTC m=+127.586269291 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.216890 5124 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-9jvql container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:10:49 crc kubenswrapper[5124]: [-]has-synced failed: reason withheld Jan 26 00:10:49 crc kubenswrapper[5124]: [+]process-running ok Jan 26 00:10:49 crc kubenswrapper[5124]: healthz check failed Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.216947 5124 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-9jvql" podUID="c2cd8439-aeb3-4321-9842-11b3cbb37b0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.278068 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:49 crc kubenswrapper[5124]: E0126 00:10:49.278386 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:49.778372491 +0000 UTC m=+127.687291830 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.316975 5124 generic.go:358] "Generic (PLEG): container finished" podID="c696bafb-e286-4dc1-8edd-860c8c0564da" containerID="a765c7ffd90b19a9379e99366a1ecd2605c93ead776635aae8e68b85842d24b9" exitCode=0 Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.317056 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" event={"ID":"c696bafb-e286-4dc1-8edd-860c8c0564da","Type":"ContainerStarted","Data":"65f031684467c7f6bf4f23ce860d9f3dbe29725219763c92438f72eb9ce3428c"} Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.317100 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" event={"ID":"c696bafb-e286-4dc1-8edd-860c8c0564da","Type":"ContainerDied","Data":"a765c7ffd90b19a9379e99366a1ecd2605c93ead776635aae8e68b85842d24b9"} Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.331820 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jkc7k" event={"ID":"e93a2f69-37f1-47bc-b659-8684acf34de3","Type":"ContainerStarted","Data":"4227517983bf22cfa620a3f5c3ccaee365570b9a6188139437fb3131f8e1b3c7"} Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.335355 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-fqxww" event={"ID":"27a594f4-28ad-49d0-8ab7-f0c0ff14d65c","Type":"ContainerStarted","Data":"2b56c7becc39d8a6864f9353d3db1b68e9e574ddf37ce9a05149115ed162b081"} Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.347515 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.354248 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"b1e32d7d7a0137f2bc27ff1b6a2c7eadea48ec9c2b0832f560abf73951e16970"} Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.355319 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.359771 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-lvq9k" event={"ID":"d76339a3-5850-4e27-be40-03180dc8e526","Type":"ContainerStarted","Data":"32b54209c3395a816c991c369fe19fbe24e37344df5c726766ea529edb05deb2"} Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.359806 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-lvq9k" event={"ID":"d76339a3-5850-4e27-be40-03180dc8e526","Type":"ContainerStarted","Data":"b93da7112744b45cefc3a43f1ceb7531851d6b5fdff43842155effaba7b7cd6a"} Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.361903 5124 kubelet.go:2537] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/certified-operators-jk654"] Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.368534 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jk654" Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.369125 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kwjfc" event={"ID":"2e062989-8ba6-44a5-8f95-e1958da237ad","Type":"ContainerStarted","Data":"acccf9d9d369aeaad930f73288d6755651ada67397eac8a1d47e48e9d2963448"} Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.370916 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kpn7g" event={"ID":"460f5edc-0e33-44ee-b8ad-41e51e22924a","Type":"ContainerStarted","Data":"f11409a9023805a287072eddfcbba08aa3f7a517cb2b86bd7de80be9b0749a26"} Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.373921 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.375808 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-9qgdz" event={"ID":"80cd99f0-6ac5-4187-9bdd-79dde0e74a57","Type":"ContainerStarted","Data":"d13243d06c4053ce685e54df542e9b38cb2d365f411d3809b6365a38c5b2354d"} Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.380074 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:49 crc kubenswrapper[5124]: E0126 00:10:49.380436 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:49.880420698 +0000 UTC m=+127.789340047 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.381124 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jk654"] Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.391427 5124 generic.go:358] "Generic (PLEG): container finished" podID="498973e3-482d-4a19-9224-c3e67efc2a20" containerID="66a2f714e88c94403a22640cdf79e60bb2d9960f65a6ab1b58d37386f63bcdbd" exitCode=0 Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.391501 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-fpklc" event={"ID":"498973e3-482d-4a19-9224-c3e67efc2a20","Type":"ContainerStarted","Data":"4e872114ff7d2535c887c4e23c82c538679ce7c7ed425999513166d0b47cc8c2"} Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.391523 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-fpklc" event={"ID":"498973e3-482d-4a19-9224-c3e67efc2a20","Type":"ContainerDied","Data":"66a2f714e88c94403a22640cdf79e60bb2d9960f65a6ab1b58d37386f63bcdbd"} Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.393651 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=39.393641879 podStartE2EDuration="39.393641879s" podCreationTimestamp="2026-01-26 00:10:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:49.390192206 +0000 UTC m=+127.299111555" watchObservedRunningTime="2026-01-26 00:10:49.393641879 +0000 UTC m=+127.302561228" Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.414071 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-t5442" event={"ID":"1185cd69-7c6a-46f0-acf1-64d587996124","Type":"ContainerStarted","Data":"b3eb099ba4d491569d12672016387f623699db87cf52eaa67ac716f38d4f4ec1"} Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.428895 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-5hwt4" Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.443294 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5tzb8" Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.471709 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-9qgdz" podStartSLOduration=108.471692657 podStartE2EDuration="1m48.471692657s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:49.426433427 +0000 UTC m=+127.335352796" watchObservedRunningTime="2026-01-26 00:10:49.471692657 +0000 UTC m=+127.380612006" Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.488309 5124 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93d4050c-d7fd-40b6-bd58-133f961c4077-utilities\") pod \"certified-operators-jk654\" (UID: \"93d4050c-d7fd-40b6-bd58-133f961c4077\") " pod="openshift-marketplace/certified-operators-jk654" Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.488780 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93d4050c-d7fd-40b6-bd58-133f961c4077-catalog-content\") pod \"certified-operators-jk654\" (UID: \"93d4050c-d7fd-40b6-bd58-133f961c4077\") " pod="openshift-marketplace/certified-operators-jk654" Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.488915 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.489218 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf8pl\" (UniqueName: \"kubernetes.io/projected/93d4050c-d7fd-40b6-bd58-133f961c4077-kube-api-access-cf8pl\") pod \"certified-operators-jk654\" (UID: \"93d4050c-d7fd-40b6-bd58-133f961c4077\") " pod="openshift-marketplace/certified-operators-jk654" Jan 26 00:10:49 crc kubenswrapper[5124]: E0126 00:10:49.525389 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:49.999454384 +0000 UTC m=+127.908373733 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.527158 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-lvq9k" podStartSLOduration=108.527143728 podStartE2EDuration="1m48.527143728s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:49.521988651 +0000 UTC m=+127.430908020" watchObservedRunningTime="2026-01-26 00:10:49.527143728 +0000 UTC m=+127.436063067" Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.585639 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-shkmx"] Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.630286 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.630408 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cf8pl\" (UniqueName: \"kubernetes.io/projected/93d4050c-d7fd-40b6-bd58-133f961c4077-kube-api-access-cf8pl\") pod \"certified-operators-jk654\" (UID: \"93d4050c-d7fd-40b6-bd58-133f961c4077\") " pod="openshift-marketplace/certified-operators-jk654" Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.630487 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93d4050c-d7fd-40b6-bd58-133f961c4077-utilities\") pod \"certified-operators-jk654\" (UID: \"93d4050c-d7fd-40b6-bd58-133f961c4077\") " pod="openshift-marketplace/certified-operators-jk654" Jan 26 00:10:49 crc kubenswrapper[5124]: E0126 00:10:49.630538 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:50.130513359 +0000 UTC m=+128.039432708 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.630676 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93d4050c-d7fd-40b6-bd58-133f961c4077-catalog-content\") pod \"certified-operators-jk654\" (UID: \"93d4050c-d7fd-40b6-bd58-133f961c4077\") " pod="openshift-marketplace/certified-operators-jk654" Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.630927 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93d4050c-d7fd-40b6-bd58-133f961c4077-utilities\") pod \"certified-operators-jk654\" (UID: \"93d4050c-d7fd-40b6-bd58-133f961c4077\") " pod="openshift-marketplace/certified-operators-jk654" Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.631103 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93d4050c-d7fd-40b6-bd58-133f961c4077-catalog-content\") pod \"certified-operators-jk654\" (UID: \"93d4050c-d7fd-40b6-bd58-133f961c4077\") " pod="openshift-marketplace/certified-operators-jk654" Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.650440 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-fpklc" podStartSLOduration=108.650420136 podStartE2EDuration="1m48.650420136s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:49.646176854 +0000 UTC m=+127.555096273" watchObservedRunningTime="2026-01-26 00:10:49.650420136 +0000 UTC m=+127.559339485" Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.669687 5124 ???:1] "http: TLS handshake error from 192.168.126.11:43182: no serving certificate available for the kubelet" Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.673957 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf8pl\" (UniqueName: \"kubernetes.io/projected/93d4050c-d7fd-40b6-bd58-133f961c4077-kube-api-access-cf8pl\") pod \"certified-operators-jk654\" (UID: \"93d4050c-d7fd-40b6-bd58-133f961c4077\") " pod="openshift-marketplace/certified-operators-jk654" Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.732484 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:49 crc kubenswrapper[5124]: E0126 00:10:49.732836 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 00:10:50.232823022 +0000 UTC m=+128.141742371 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.745366 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jk654" Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.763385 5124 ???:1] "http: TLS handshake error from 192.168.126.11:43186: no serving certificate available for the kubelet" Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.834708 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:49 crc kubenswrapper[5124]: E0126 00:10:49.841376 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:50.341350539 +0000 UTC m=+128.250269888 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.841691 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:49 crc kubenswrapper[5124]: E0126 00:10:49.842072 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:50.342059298 +0000 UTC m=+128.250978647 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.859835 5124 ???:1] "http: TLS handshake error from 192.168.126.11:43200: no serving certificate available for the kubelet" Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.943090 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:49 crc kubenswrapper[5124]: E0126 00:10:49.943370 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:50.443342923 +0000 UTC m=+128.352262262 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.943878 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:49 crc kubenswrapper[5124]: E0126 00:10:49.944178 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:50.444170585 +0000 UTC m=+128.353089934 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:49 crc kubenswrapper[5124]: I0126 00:10:49.968430 5124 ???:1] "http: TLS handshake error from 192.168.126.11:43206: no serving certificate available for the kubelet" Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.045263 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:50 crc kubenswrapper[5124]: E0126 00:10:50.045553 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:50.545537092 +0000 UTC m=+128.454456441 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:50 crc kubenswrapper[5124]: W0126 00:10:50.057147 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93d4050c_d7fd_40b6_bd58_133f961c4077.slice/crio-c0fc93185bafc71ea165ce4feeb39bee289bb60989a3f867b1ad39aa1a2721fc WatchSource:0}: Error finding container c0fc93185bafc71ea165ce4feeb39bee289bb60989a3f867b1ad39aa1a2721fc: Status 404 returned error can't find the container with id c0fc93185bafc71ea165ce4feeb39bee289bb60989a3f867b1ad39aa1a2721fc Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.073847 5124 ???:1] "http: TLS handshake error from 192.168.126.11:43210: no serving certificate available for the kubelet" Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.114540 5124 ???:1] "http: TLS handshake error from 192.168.126.11:43212: no serving certificate available for the kubelet" Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.146457 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:50 crc kubenswrapper[5124]: E0126 00:10:50.146892 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 00:10:50.646873379 +0000 UTC m=+128.555792728 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.194482 5124 ???:1] "http: TLS handshake error from 192.168.126.11:43222: no serving certificate available for the kubelet" Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.203922 5124 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-9jvql container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:10:50 crc kubenswrapper[5124]: [-]has-synced failed: reason withheld Jan 26 00:10:50 crc kubenswrapper[5124]: [+]process-running ok Jan 26 00:10:50 crc kubenswrapper[5124]: healthz check failed Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.203997 5124 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-9jvql" podUID="c2cd8439-aeb3-4321-9842-11b3cbb37b0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.247582 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:50 crc kubenswrapper[5124]: E0126 00:10:50.247876 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:50.747861137 +0000 UTC m=+128.656780486 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.316707 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-zbjgw" Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.316759 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-ns6rw" Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.316772 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-shkmx"] Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.316791 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nkp7h"] Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.317019 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-shkmx" Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.319412 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.320532 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nkp7h"] Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.320559 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rmhkx"] Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.320761 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nkp7h" Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.327979 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rmhkx"] Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.328014 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jk654"] Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.328166 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rmhkx" Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.349147 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:50 crc kubenswrapper[5124]: E0126 00:10:50.349834 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:50.849550223 +0000 UTC m=+128.758469572 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.448040 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" event={"ID":"c696bafb-e286-4dc1-8edd-860c8c0564da","Type":"ContainerStarted","Data":"7879d7ae96ebc3e266a657fbae2dfc3e17e2cbbf867024cb77646bdc808a274d"} Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.449674 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.449763 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/433ef7d9-9310-4fac-9271-fa7143485c0b-utilities\") pod \"certified-operators-nkp7h\" (UID: \"433ef7d9-9310-4fac-9271-fa7143485c0b\") " pod="openshift-marketplace/certified-operators-nkp7h" Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.449798 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ec6118e-bf44-44b1-8098-637ebd0083f7-utilities\") pod \"community-operators-shkmx\" (UID: \"5ec6118e-bf44-44b1-8098-637ebd0083f7\") " pod="openshift-marketplace/community-operators-shkmx" Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.449819 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/433ef7d9-9310-4fac-9271-fa7143485c0b-catalog-content\") pod \"certified-operators-nkp7h\" (UID: \"433ef7d9-9310-4fac-9271-fa7143485c0b\") " pod="openshift-marketplace/certified-operators-nkp7h" Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.449873 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w5n2\" (UniqueName: \"kubernetes.io/projected/af75a02f-4678-4afa-a8c2-acaddf134bc4-kube-api-access-4w5n2\") pod \"community-operators-rmhkx\" (UID: \"af75a02f-4678-4afa-a8c2-acaddf134bc4\") " pod="openshift-marketplace/community-operators-rmhkx" Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.449895 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj67x\" (UniqueName: \"kubernetes.io/projected/5ec6118e-bf44-44b1-8098-637ebd0083f7-kube-api-access-pj67x\") pod \"community-operators-shkmx\" (UID: \"5ec6118e-bf44-44b1-8098-637ebd0083f7\") " pod="openshift-marketplace/community-operators-shkmx" Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.449919 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ec6118e-bf44-44b1-8098-637ebd0083f7-catalog-content\") pod 
\"community-operators-shkmx\" (UID: \"5ec6118e-bf44-44b1-8098-637ebd0083f7\") " pod="openshift-marketplace/community-operators-shkmx" Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.449953 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlcx8\" (UniqueName: \"kubernetes.io/projected/433ef7d9-9310-4fac-9271-fa7143485c0b-kube-api-access-xlcx8\") pod \"certified-operators-nkp7h\" (UID: \"433ef7d9-9310-4fac-9271-fa7143485c0b\") " pod="openshift-marketplace/certified-operators-nkp7h" Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.449997 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af75a02f-4678-4afa-a8c2-acaddf134bc4-utilities\") pod \"community-operators-rmhkx\" (UID: \"af75a02f-4678-4afa-a8c2-acaddf134bc4\") " pod="openshift-marketplace/community-operators-rmhkx" Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.450015 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af75a02f-4678-4afa-a8c2-acaddf134bc4-catalog-content\") pod \"community-operators-rmhkx\" (UID: \"af75a02f-4678-4afa-a8c2-acaddf134bc4\") " pod="openshift-marketplace/community-operators-rmhkx" Jan 26 00:10:50 crc kubenswrapper[5124]: E0126 00:10:50.450101 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:50.950085479 +0000 UTC m=+128.859004828 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.466911 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jk654" event={"ID":"93d4050c-d7fd-40b6-bd58-133f961c4077","Type":"ContainerStarted","Data":"c0fc93185bafc71ea165ce4feeb39bee289bb60989a3f867b1ad39aa1a2721fc"} Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.536731 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" podStartSLOduration=109.536714257 podStartE2EDuration="1m49.536714257s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:50.531053436 +0000 UTC m=+128.439972815" watchObservedRunningTime="2026-01-26 00:10:50.536714257 +0000 UTC m=+128.445633606" Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.551680 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ec6118e-bf44-44b1-8098-637ebd0083f7-catalog-content\") pod \"community-operators-shkmx\" (UID: \"5ec6118e-bf44-44b1-8098-637ebd0083f7\") " pod="openshift-marketplace/community-operators-shkmx" Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.551801 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xlcx8\" (UniqueName: \"kubernetes.io/projected/433ef7d9-9310-4fac-9271-fa7143485c0b-kube-api-access-xlcx8\") pod \"certified-operators-nkp7h\" (UID: \"433ef7d9-9310-4fac-9271-fa7143485c0b\") " pod="openshift-marketplace/certified-operators-nkp7h" Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.552045 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af75a02f-4678-4afa-a8c2-acaddf134bc4-utilities\") pod \"community-operators-rmhkx\" (UID: \"af75a02f-4678-4afa-a8c2-acaddf134bc4\") " pod="openshift-marketplace/community-operators-rmhkx" Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.552077 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af75a02f-4678-4afa-a8c2-acaddf134bc4-catalog-content\") pod \"community-operators-rmhkx\" (UID: \"af75a02f-4678-4afa-a8c2-acaddf134bc4\") " pod="openshift-marketplace/community-operators-rmhkx" Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.552151 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/433ef7d9-9310-4fac-9271-fa7143485c0b-utilities\") pod \"certified-operators-nkp7h\" (UID: \"433ef7d9-9310-4fac-9271-fa7143485c0b\") " pod="openshift-marketplace/certified-operators-nkp7h" Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.552248 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5ec6118e-bf44-44b1-8098-637ebd0083f7-utilities\") pod \"community-operators-shkmx\" (UID: \"5ec6118e-bf44-44b1-8098-637ebd0083f7\") " pod="openshift-marketplace/community-operators-shkmx" Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.552336 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/433ef7d9-9310-4fac-9271-fa7143485c0b-catalog-content\") pod \"certified-operators-nkp7h\" (UID: \"433ef7d9-9310-4fac-9271-fa7143485c0b\") " pod="openshift-marketplace/certified-operators-nkp7h" Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.552425 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.552707 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4w5n2\" (UniqueName: \"kubernetes.io/projected/af75a02f-4678-4afa-a8c2-acaddf134bc4-kube-api-access-4w5n2\") pod \"community-operators-rmhkx\" (UID: \"af75a02f-4678-4afa-a8c2-acaddf134bc4\") " pod="openshift-marketplace/community-operators-rmhkx" Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.552790 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pj67x\" (UniqueName: \"kubernetes.io/projected/5ec6118e-bf44-44b1-8098-637ebd0083f7-kube-api-access-pj67x\") pod \"community-operators-shkmx\" (UID: \"5ec6118e-bf44-44b1-8098-637ebd0083f7\") " pod="openshift-marketplace/community-operators-shkmx" Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.554273 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ec6118e-bf44-44b1-8098-637ebd0083f7-catalog-content\") pod \"community-operators-shkmx\" (UID: \"5ec6118e-bf44-44b1-8098-637ebd0083f7\") " pod="openshift-marketplace/community-operators-shkmx" Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.563272 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/433ef7d9-9310-4fac-9271-fa7143485c0b-catalog-content\") pod \"certified-operators-nkp7h\" (UID: \"433ef7d9-9310-4fac-9271-fa7143485c0b\") " pod="openshift-marketplace/certified-operators-nkp7h" Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.564476 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af75a02f-4678-4afa-a8c2-acaddf134bc4-utilities\") pod \"community-operators-rmhkx\" (UID: \"af75a02f-4678-4afa-a8c2-acaddf134bc4\") " pod="openshift-marketplace/community-operators-rmhkx" Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.564804 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af75a02f-4678-4afa-a8c2-acaddf134bc4-catalog-content\") pod \"community-operators-rmhkx\" (UID: \"af75a02f-4678-4afa-a8c2-acaddf134bc4\") " pod="openshift-marketplace/community-operators-rmhkx" Jan 26 00:10:50 crc kubenswrapper[5124]: E0126 00:10:50.565131 5124 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:51.065113039 +0000 UTC m=+128.974032478 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.565872 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/433ef7d9-9310-4fac-9271-fa7143485c0b-utilities\") pod \"certified-operators-nkp7h\" (UID: \"433ef7d9-9310-4fac-9271-fa7143485c0b\") " pod="openshift-marketplace/certified-operators-nkp7h" Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.568163 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ec6118e-bf44-44b1-8098-637ebd0083f7-utilities\") pod \"community-operators-shkmx\" (UID: \"5ec6118e-bf44-44b1-8098-637ebd0083f7\") " pod="openshift-marketplace/community-operators-shkmx" Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.570784 5124 ???:1] "http: TLS handshake error from 192.168.126.11:43236: no serving certificate available for the kubelet" Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.591988 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlcx8\" (UniqueName: \"kubernetes.io/projected/433ef7d9-9310-4fac-9271-fa7143485c0b-kube-api-access-xlcx8\") pod \"certified-operators-nkp7h\" (UID: \"433ef7d9-9310-4fac-9271-fa7143485c0b\") " pod="openshift-marketplace/certified-operators-nkp7h" Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.602114 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj67x\" (UniqueName: \"kubernetes.io/projected/5ec6118e-bf44-44b1-8098-637ebd0083f7-kube-api-access-pj67x\") pod \"community-operators-shkmx\" (UID: \"5ec6118e-bf44-44b1-8098-637ebd0083f7\") " pod="openshift-marketplace/community-operators-shkmx" Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.606474 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w5n2\" (UniqueName: \"kubernetes.io/projected/af75a02f-4678-4afa-a8c2-acaddf134bc4-kube-api-access-4w5n2\") pod \"community-operators-rmhkx\" (UID: \"af75a02f-4678-4afa-a8c2-acaddf134bc4\") " pod="openshift-marketplace/community-operators-rmhkx" Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.634528 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-shkmx" Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.647804 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nkp7h" Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.657123 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:50 crc kubenswrapper[5124]: E0126 00:10:50.658408 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:51.158388352 +0000 UTC m=+129.067307701 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.674548 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rmhkx" Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.759431 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:50 crc kubenswrapper[5124]: E0126 00:10:50.759832 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:51.259800531 +0000 UTC m=+129.168719880 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.860574 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:50 crc kubenswrapper[5124]: E0126 00:10:50.860729 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-26 00:10:51.360701287 +0000 UTC m=+129.269620636 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.860836 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:50 crc kubenswrapper[5124]: E0126 00:10:50.861237 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:51.36122161 +0000 UTC m=+129.270140959 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:50 crc kubenswrapper[5124]: I0126 00:10:50.962024 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:50 crc kubenswrapper[5124]: E0126 00:10:50.962297 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:51.46228107 +0000 UTC m=+129.371200419 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.063451 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:51 crc kubenswrapper[5124]: E0126 00:10:51.063830 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:51.563812232 +0000 UTC m=+129.472731581 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.134764 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nkp7h"] Jan 26 00:10:51 crc kubenswrapper[5124]: W0126 00:10:51.155579 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod433ef7d9_9310_4fac_9271_fa7143485c0b.slice/crio-42c8c90f05e9d0113ecc8733bf9991d3c2507f7dd97ec1ab72fed67f71a3e7c2 WatchSource:0}: Error finding container 42c8c90f05e9d0113ecc8733bf9991d3c2507f7dd97ec1ab72fed67f71a3e7c2: Status 404 returned error can't find the container with id 42c8c90f05e9d0113ecc8733bf9991d3c2507f7dd97ec1ab72fed67f71a3e7c2 Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.164439 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:51 crc kubenswrapper[5124]: E0126 00:10:51.164699 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:51.664664786 +0000 UTC m=+129.573584135 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.165061 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:51 crc kubenswrapper[5124]: E0126 00:10:51.165421 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:51.665409406 +0000 UTC m=+129.574328755 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.204054 5124 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-9jvql container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:10:51 crc kubenswrapper[5124]: [-]has-synced failed: reason withheld Jan 26 00:10:51 crc kubenswrapper[5124]: [+]process-running ok Jan 26 00:10:51 crc kubenswrapper[5124]: healthz check failed Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.204121 5124 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-9jvql" podUID="c2cd8439-aeb3-4321-9842-11b3cbb37b0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.265857 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:51 crc kubenswrapper[5124]: E0126 00:10:51.266627 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:51.766608789 +0000 UTC m=+129.675528138 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.281234 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.283046 5124 ???:1] "http: TLS handshake error from 192.168.126.11:43242: no serving certificate available for the kubelet" Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.287976 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.294835 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\"" Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.295464 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\"" Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.301602 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.366796 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4898t"] Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.368295 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.368350 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e867b2bf-b434-4fea-a6f9-6194fac536dd-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"e867b2bf-b434-4fea-a6f9-6194fac536dd\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.368397 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e867b2bf-b434-4fea-a6f9-6194fac536dd-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"e867b2bf-b434-4fea-a6f9-6194fac536dd\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 26 00:10:51 crc kubenswrapper[5124]: E0126 00:10:51.368668 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:51.868657035 +0000 UTC m=+129.777576374 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.377551 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4898t" Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.380244 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.383034 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4898t"] Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.409458 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-shkmx"] Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.413090 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rmhkx"] Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.471201 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.471458 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67b1669f-4753-4b71-bf6f-3b1972f4f33d-utilities\") pod \"redhat-marketplace-4898t\" (UID: \"67b1669f-4753-4b71-bf6f-3b1972f4f33d\") " pod="openshift-marketplace/redhat-marketplace-4898t" Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.471492 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e867b2bf-b434-4fea-a6f9-6194fac536dd-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"e867b2bf-b434-4fea-a6f9-6194fac536dd\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.471523 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67b1669f-4753-4b71-bf6f-3b1972f4f33d-catalog-content\") pod \"redhat-marketplace-4898t\" (UID: \"67b1669f-4753-4b71-bf6f-3b1972f4f33d\") " pod="openshift-marketplace/redhat-marketplace-4898t" Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.471556 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e867b2bf-b434-4fea-a6f9-6194fac536dd-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"e867b2bf-b434-4fea-a6f9-6194fac536dd\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.471604 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jprtm\" 
(UniqueName: \"kubernetes.io/projected/67b1669f-4753-4b71-bf6f-3b1972f4f33d-kube-api-access-jprtm\") pod \"redhat-marketplace-4898t\" (UID: \"67b1669f-4753-4b71-bf6f-3b1972f4f33d\") " pod="openshift-marketplace/redhat-marketplace-4898t" Jan 26 00:10:51 crc kubenswrapper[5124]: E0126 00:10:51.471713 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:51.971696516 +0000 UTC m=+129.880615865 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.471750 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e867b2bf-b434-4fea-a6f9-6194fac536dd-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"e867b2bf-b434-4fea-a6f9-6194fac536dd\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.499235 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e867b2bf-b434-4fea-a6f9-6194fac536dd-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"e867b2bf-b434-4fea-a6f9-6194fac536dd\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.504724 5124 generic.go:358] "Generic (PLEG): container finished" podID="433ef7d9-9310-4fac-9271-fa7143485c0b" containerID="606707c67d893b97146fa8f78c4895e56a419fb8f503bc6e14e90a46cf97f59c" exitCode=0 Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.504833 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nkp7h" event={"ID":"433ef7d9-9310-4fac-9271-fa7143485c0b","Type":"ContainerDied","Data":"606707c67d893b97146fa8f78c4895e56a419fb8f503bc6e14e90a46cf97f59c"} Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.504860 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nkp7h" event={"ID":"433ef7d9-9310-4fac-9271-fa7143485c0b","Type":"ContainerStarted","Data":"42c8c90f05e9d0113ecc8733bf9991d3c2507f7dd97ec1ab72fed67f71a3e7c2"} Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.515884 5124 generic.go:358] "Generic (PLEG): container finished" podID="2c16907d-1bcd-420c-879d-65a0552e69d3" containerID="a6069bcc11e12334b2799d9c0d35bcf66dee472addcb45c5eec3cb3e0e857220" exitCode=0 Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.515982 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-ldpxs" event={"ID":"2c16907d-1bcd-420c-879d-65a0552e69d3","Type":"ContainerDied","Data":"a6069bcc11e12334b2799d9c0d35bcf66dee472addcb45c5eec3cb3e0e857220"} Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.519122 5124 generic.go:358] "Generic (PLEG): container finished" podID="93d4050c-d7fd-40b6-bd58-133f961c4077" 
containerID="962a60ef248cf1ab7721f0a553e5c60c33592ac0de743d71080fa266306631e9" exitCode=0 Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.519187 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jk654" event={"ID":"93d4050c-d7fd-40b6-bd58-133f961c4077","Type":"ContainerDied","Data":"962a60ef248cf1ab7721f0a553e5c60c33592ac0de743d71080fa266306631e9"} Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.524046 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shkmx" event={"ID":"5ec6118e-bf44-44b1-8098-637ebd0083f7","Type":"ContainerStarted","Data":"99b9d8eac78e81d3e816527e0264d0b9a587cb8f8e12b6d81af1f7c75f908bb8"} Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.525768 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmhkx" event={"ID":"af75a02f-4678-4afa-a8c2-acaddf134bc4","Type":"ContainerStarted","Data":"719acc09216e8307e71e25002684075bd3deb615a21ab323e1ca1bb2b70e25fe"} Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.578775 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jprtm\" (UniqueName: \"kubernetes.io/projected/67b1669f-4753-4b71-bf6f-3b1972f4f33d-kube-api-access-jprtm\") pod \"redhat-marketplace-4898t\" (UID: \"67b1669f-4753-4b71-bf6f-3b1972f4f33d\") " pod="openshift-marketplace/redhat-marketplace-4898t" Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.578877 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.578907 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67b1669f-4753-4b71-bf6f-3b1972f4f33d-utilities\") pod \"redhat-marketplace-4898t\" (UID: \"67b1669f-4753-4b71-bf6f-3b1972f4f33d\") " pod="openshift-marketplace/redhat-marketplace-4898t" Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.578956 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67b1669f-4753-4b71-bf6f-3b1972f4f33d-catalog-content\") pod \"redhat-marketplace-4898t\" (UID: \"67b1669f-4753-4b71-bf6f-3b1972f4f33d\") " pod="openshift-marketplace/redhat-marketplace-4898t" Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.579367 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67b1669f-4753-4b71-bf6f-3b1972f4f33d-catalog-content\") pod \"redhat-marketplace-4898t\" (UID: \"67b1669f-4753-4b71-bf6f-3b1972f4f33d\") " pod="openshift-marketplace/redhat-marketplace-4898t" Jan 26 00:10:51 crc kubenswrapper[5124]: E0126 00:10:51.579637 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:52.079623948 +0000 UTC m=+129.988543297 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.579964 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67b1669f-4753-4b71-bf6f-3b1972f4f33d-utilities\") pod \"redhat-marketplace-4898t\" (UID: \"67b1669f-4753-4b71-bf6f-3b1972f4f33d\") " pod="openshift-marketplace/redhat-marketplace-4898t" Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.599272 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jprtm\" (UniqueName: \"kubernetes.io/projected/67b1669f-4753-4b71-bf6f-3b1972f4f33d-kube-api-access-jprtm\") pod \"redhat-marketplace-4898t\" (UID: \"67b1669f-4753-4b71-bf6f-3b1972f4f33d\") " pod="openshift-marketplace/redhat-marketplace-4898t" Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.619315 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.680488 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:51 crc kubenswrapper[5124]: E0126 00:10:51.681889 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:52.181872509 +0000 UTC m=+130.090791858 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.713860 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4898t" Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.782459 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-t4qj8"] Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.782793 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:51 crc kubenswrapper[5124]: E0126 00:10:51.783117 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:52.283103133 +0000 UTC m=+130.192022482 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.788897 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t4qj8" Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.791222 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4qj8"] Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.884691 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.885285 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mf2w\" (UniqueName: \"kubernetes.io/projected/e73a9b84-fd97-46e5-a51c-8f4ca069c13b-kube-api-access-6mf2w\") pod \"redhat-marketplace-t4qj8\" (UID: \"e73a9b84-fd97-46e5-a51c-8f4ca069c13b\") " pod="openshift-marketplace/redhat-marketplace-t4qj8" Jan 26 00:10:51 crc kubenswrapper[5124]: E0126 00:10:51.892049 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:52.3919829 +0000 UTC m=+130.300902249 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.892117 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e73a9b84-fd97-46e5-a51c-8f4ca069c13b-utilities\") pod \"redhat-marketplace-t4qj8\" (UID: \"e73a9b84-fd97-46e5-a51c-8f4ca069c13b\") " pod="openshift-marketplace/redhat-marketplace-t4qj8" Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.892194 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e73a9b84-fd97-46e5-a51c-8f4ca069c13b-catalog-content\") pod \"redhat-marketplace-t4qj8\" (UID: \"e73a9b84-fd97-46e5-a51c-8f4ca069c13b\") " pod="openshift-marketplace/redhat-marketplace-t4qj8" Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.894770 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 26 00:10:51 crc kubenswrapper[5124]: W0126 00:10:51.909086 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode867b2bf_b434_4fea_a6f9_6194fac536dd.slice/crio-40ef2d2adea80e57178d9a633cb49059bd1f9f1759b8ebc44eeec79cf79f7751 WatchSource:0}: Error finding container 40ef2d2adea80e57178d9a633cb49059bd1f9f1759b8ebc44eeec79cf79f7751: Status 404 returned error can't find the container with id 40ef2d2adea80e57178d9a633cb49059bd1f9f1759b8ebc44eeec79cf79f7751 Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.993542 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6mf2w\" (UniqueName: \"kubernetes.io/projected/e73a9b84-fd97-46e5-a51c-8f4ca069c13b-kube-api-access-6mf2w\") pod \"redhat-marketplace-t4qj8\" (UID: \"e73a9b84-fd97-46e5-a51c-8f4ca069c13b\") " pod="openshift-marketplace/redhat-marketplace-t4qj8" Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.993605 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e73a9b84-fd97-46e5-a51c-8f4ca069c13b-utilities\") pod \"redhat-marketplace-t4qj8\" (UID: \"e73a9b84-fd97-46e5-a51c-8f4ca069c13b\") " pod="openshift-marketplace/redhat-marketplace-t4qj8" Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.993962 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e73a9b84-fd97-46e5-a51c-8f4ca069c13b-catalog-content\") pod \"redhat-marketplace-t4qj8\" (UID: \"e73a9b84-fd97-46e5-a51c-8f4ca069c13b\") " pod="openshift-marketplace/redhat-marketplace-t4qj8" Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.994017 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " 
pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:51 crc kubenswrapper[5124]: E0126 00:10:51.994519 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:52.494507998 +0000 UTC m=+130.403427347 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.995628 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e73a9b84-fd97-46e5-a51c-8f4ca069c13b-utilities\") pod \"redhat-marketplace-t4qj8\" (UID: \"e73a9b84-fd97-46e5-a51c-8f4ca069c13b\") " pod="openshift-marketplace/redhat-marketplace-t4qj8" Jan 26 00:10:51 crc kubenswrapper[5124]: I0126 00:10:51.996102 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e73a9b84-fd97-46e5-a51c-8f4ca069c13b-catalog-content\") pod \"redhat-marketplace-t4qj8\" (UID: \"e73a9b84-fd97-46e5-a51c-8f4ca069c13b\") " pod="openshift-marketplace/redhat-marketplace-t4qj8" Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.026144 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4898t"] Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.027880 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mf2w\" (UniqueName: \"kubernetes.io/projected/e73a9b84-fd97-46e5-a51c-8f4ca069c13b-kube-api-access-6mf2w\") pod \"redhat-marketplace-t4qj8\" (UID: \"e73a9b84-fd97-46e5-a51c-8f4ca069c13b\") " pod="openshift-marketplace/redhat-marketplace-t4qj8" Jan 26 00:10:52 crc kubenswrapper[5124]: W0126 00:10:52.041756 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67b1669f_4753_4b71_bf6f_3b1972f4f33d.slice/crio-96ce5cd946093f2830211550351d99eb448d2963ec3ca80cacfe6935eb94664f WatchSource:0}: Error finding container 96ce5cd946093f2830211550351d99eb448d2963ec3ca80cacfe6935eb94664f: Status 404 returned error can't find the container with id 96ce5cd946093f2830211550351d99eb448d2963ec3ca80cacfe6935eb94664f Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.096204 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:52 crc kubenswrapper[5124]: E0126 00:10:52.096378 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:52.596348248 +0000 UTC m=+130.505267597 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.096576 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:52 crc kubenswrapper[5124]: E0126 00:10:52.096926 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:52.596918934 +0000 UTC m=+130.505838283 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.121195 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t4qj8" Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.198046 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:52 crc kubenswrapper[5124]: E0126 00:10:52.198208 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:52.698182149 +0000 UTC m=+130.607101498 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.198749 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:52 crc kubenswrapper[5124]: E0126 00:10:52.199063 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:52.699047442 +0000 UTC m=+130.607966791 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.202742 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-9jvql" Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.217035 5124 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-9jvql container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:10:52 crc kubenswrapper[5124]: [-]has-synced failed: reason withheld Jan 26 00:10:52 crc kubenswrapper[5124]: [+]process-running ok Jan 26 00:10:52 crc kubenswrapper[5124]: healthz check failed Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.217092 5124 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-9jvql" podUID="c2cd8439-aeb3-4321-9842-11b3cbb37b0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.300163 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:52 crc kubenswrapper[5124]: E0126 00:10:52.300731 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:52.800714757 +0000 UTC m=+130.709634106 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.413975 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:52 crc kubenswrapper[5124]: E0126 00:10:52.414826 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:52.914813502 +0000 UTC m=+130.823732851 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.514778 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:52 crc kubenswrapper[5124]: E0126 00:10:52.515251 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:53.015231005 +0000 UTC m=+130.924150354 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.549618 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kwjfc" event={"ID":"2e062989-8ba6-44a5-8f95-e1958da237ad","Type":"ContainerStarted","Data":"ae8782fdaacc4bb86e061cda4fef17e859d34c4535ed952371e5e2e87b49378b"} Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.552481 5124 generic.go:358] "Generic (PLEG): container finished" podID="67b1669f-4753-4b71-bf6f-3b1972f4f33d" containerID="257ee670e3b3eca172f20d10f08eb87301097803b111cc82d59a96773c86c0ba" exitCode=0 Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.552720 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4898t" event={"ID":"67b1669f-4753-4b71-bf6f-3b1972f4f33d","Type":"ContainerDied","Data":"257ee670e3b3eca172f20d10f08eb87301097803b111cc82d59a96773c86c0ba"} Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.552753 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4898t" event={"ID":"67b1669f-4753-4b71-bf6f-3b1972f4f33d","Type":"ContainerStarted","Data":"96ce5cd946093f2830211550351d99eb448d2963ec3ca80cacfe6935eb94664f"} Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.568978 5124 generic.go:358] "Generic (PLEG): container finished" podID="5ec6118e-bf44-44b1-8098-637ebd0083f7" containerID="e8ec1b7c1a9eb89bda875136238c0bda2e7a9f0fc56c0f42e2970b83c67ade57" exitCode=0 Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.569122 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shkmx" event={"ID":"5ec6118e-bf44-44b1-8098-637ebd0083f7","Type":"ContainerDied","Data":"e8ec1b7c1a9eb89bda875136238c0bda2e7a9f0fc56c0f42e2970b83c67ade57"} Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.575262 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"e867b2bf-b434-4fea-a6f9-6194fac536dd","Type":"ContainerStarted","Data":"48f6a75e18686c3631069eccfc97086dd13fc20980c1798db844efe8ba7b6cbb"} Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.575297 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"e867b2bf-b434-4fea-a6f9-6194fac536dd","Type":"ContainerStarted","Data":"40ef2d2adea80e57178d9a633cb49059bd1f9f1759b8ebc44eeec79cf79f7751"} Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.581038 5124 generic.go:358] "Generic (PLEG): container finished" podID="af75a02f-4678-4afa-a8c2-acaddf134bc4" containerID="f4d2744a8a3edc6305a2e1e8b3ea9f7a05ca48dec7024215f746a7b2eb61fe3d" exitCode=0 Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.581305 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmhkx" event={"ID":"af75a02f-4678-4afa-a8c2-acaddf134bc4","Type":"ContainerDied","Data":"f4d2744a8a3edc6305a2e1e8b3ea9f7a05ca48dec7024215f746a7b2eb61fe3d"} Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.582275 5124 kubelet.go:2537] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7m58f"] Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.615734 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:52 crc kubenswrapper[5124]: E0126 00:10:52.616136 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:53.11612191 +0000 UTC m=+131.025041259 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.617544 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/revision-pruner-6-crc" podStartSLOduration=1.6175307669999999 podStartE2EDuration="1.617530767s" podCreationTimestamp="2026-01-26 00:10:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:52.615074492 +0000 UTC m=+130.523993841" watchObservedRunningTime="2026-01-26 00:10:52.617530767 +0000 UTC m=+130.526450116" Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.627328 5124 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.628714 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7m58f"] Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.628884 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7m58f" Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.631967 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.632423 5124 ???:1] "http: TLS handshake error from 192.168.126.11:56556: no serving certificate available for the kubelet" Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.673176 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4qj8"] Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.717046 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:52 crc kubenswrapper[5124]: E0126 00:10:52.717323 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:53.217307073 +0000 UTC m=+131.126226422 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.718150 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.718208 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/beb215dd-478e-4b23-b77c-5e741e026932-utilities\") pod \"redhat-operators-7m58f\" (UID: \"beb215dd-478e-4b23-b77c-5e741e026932\") " pod="openshift-marketplace/redhat-operators-7m58f" Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.718298 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beb215dd-478e-4b23-b77c-5e741e026932-catalog-content\") pod \"redhat-operators-7m58f\" (UID: \"beb215dd-478e-4b23-b77c-5e741e026932\") " pod="openshift-marketplace/redhat-operators-7m58f" Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.718323 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf772\" (UniqueName: \"kubernetes.io/projected/beb215dd-478e-4b23-b77c-5e741e026932-kube-api-access-lf772\") pod \"redhat-operators-7m58f\" (UID: \"beb215dd-478e-4b23-b77c-5e741e026932\") " 
pod="openshift-marketplace/redhat-operators-7m58f" Jan 26 00:10:52 crc kubenswrapper[5124]: E0126 00:10:52.718644 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:53.218631797 +0000 UTC m=+131.127551136 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.819993 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:52 crc kubenswrapper[5124]: E0126 00:10:52.820135 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:53.320108479 +0000 UTC m=+131.229027828 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.820208 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beb215dd-478e-4b23-b77c-5e741e026932-catalog-content\") pod \"redhat-operators-7m58f\" (UID: \"beb215dd-478e-4b23-b77c-5e741e026932\") " pod="openshift-marketplace/redhat-operators-7m58f" Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.820254 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lf772\" (UniqueName: \"kubernetes.io/projected/beb215dd-478e-4b23-b77c-5e741e026932-kube-api-access-lf772\") pod \"redhat-operators-7m58f\" (UID: \"beb215dd-478e-4b23-b77c-5e741e026932\") " pod="openshift-marketplace/redhat-operators-7m58f" Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.820370 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.820415 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/beb215dd-478e-4b23-b77c-5e741e026932-utilities\") pod \"redhat-operators-7m58f\" (UID: \"beb215dd-478e-4b23-b77c-5e741e026932\") " pod="openshift-marketplace/redhat-operators-7m58f" Jan 26 00:10:52 crc kubenswrapper[5124]: E0126 00:10:52.820889 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:53.320876189 +0000 UTC m=+131.229795538 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.820990 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/beb215dd-478e-4b23-b77c-5e741e026932-utilities\") pod \"redhat-operators-7m58f\" (UID: \"beb215dd-478e-4b23-b77c-5e741e026932\") " pod="openshift-marketplace/redhat-operators-7m58f" Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.821285 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beb215dd-478e-4b23-b77c-5e741e026932-catalog-content\") pod \"redhat-operators-7m58f\" (UID: \"beb215dd-478e-4b23-b77c-5e741e026932\") " pod="openshift-marketplace/redhat-operators-7m58f" Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.843569 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf772\" (UniqueName: \"kubernetes.io/projected/beb215dd-478e-4b23-b77c-5e741e026932-kube-api-access-lf772\") pod \"redhat-operators-7m58f\" (UID: \"beb215dd-478e-4b23-b77c-5e741e026932\") " pod="openshift-marketplace/redhat-operators-7m58f" Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.865315 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-ldpxs" Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.921562 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.921636 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c16907d-1bcd-420c-879d-65a0552e69d3-config-volume\") pod \"2c16907d-1bcd-420c-879d-65a0552e69d3\" (UID: \"2c16907d-1bcd-420c-879d-65a0552e69d3\") " Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.921723 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2c16907d-1bcd-420c-879d-65a0552e69d3-secret-volume\") pod \"2c16907d-1bcd-420c-879d-65a0552e69d3\" (UID: \"2c16907d-1bcd-420c-879d-65a0552e69d3\") " Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.921756 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5q88\" (UniqueName: \"kubernetes.io/projected/2c16907d-1bcd-420c-879d-65a0552e69d3-kube-api-access-z5q88\") pod \"2c16907d-1bcd-420c-879d-65a0552e69d3\" (UID: \"2c16907d-1bcd-420c-879d-65a0552e69d3\") " Jan 26 00:10:52 crc kubenswrapper[5124]: E0126 00:10:52.923316 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:53.423292424 +0000 UTC m=+131.332211773 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.923772 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c16907d-1bcd-420c-879d-65a0552e69d3-config-volume" (OuterVolumeSpecName: "config-volume") pod "2c16907d-1bcd-420c-879d-65a0552e69d3" (UID: "2c16907d-1bcd-420c-879d-65a0552e69d3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.927576 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c16907d-1bcd-420c-879d-65a0552e69d3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2c16907d-1bcd-420c-879d-65a0552e69d3" (UID: "2c16907d-1bcd-420c-879d-65a0552e69d3"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.930071 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c16907d-1bcd-420c-879d-65a0552e69d3-kube-api-access-z5q88" (OuterVolumeSpecName: "kube-api-access-z5q88") pod "2c16907d-1bcd-420c-879d-65a0552e69d3" (UID: "2c16907d-1bcd-420c-879d-65a0552e69d3"). InnerVolumeSpecName "kube-api-access-z5q88". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.957420 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hbcq8"] Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.958743 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2c16907d-1bcd-420c-879d-65a0552e69d3" containerName="collect-profiles" Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.958956 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c16907d-1bcd-420c-879d-65a0552e69d3" containerName="collect-profiles" Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.959524 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="2c16907d-1bcd-420c-879d-65a0552e69d3" containerName="collect-profiles" Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.959674 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7m58f" Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.969644 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hbcq8" Jan 26 00:10:52 crc kubenswrapper[5124]: I0126 00:10:52.979821 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hbcq8"] Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.022931 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft46g\" (UniqueName: \"kubernetes.io/projected/9075b91b-c638-4c64-95b7-1c58a6e5b132-kube-api-access-ft46g\") pod \"redhat-operators-hbcq8\" (UID: \"9075b91b-c638-4c64-95b7-1c58a6e5b132\") " pod="openshift-marketplace/redhat-operators-hbcq8" Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.022963 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9075b91b-c638-4c64-95b7-1c58a6e5b132-catalog-content\") pod \"redhat-operators-hbcq8\" (UID: \"9075b91b-c638-4c64-95b7-1c58a6e5b132\") " pod="openshift-marketplace/redhat-operators-hbcq8" Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.023054 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.023075 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9075b91b-c638-4c64-95b7-1c58a6e5b132-utilities\") pod \"redhat-operators-hbcq8\" (UID: \"9075b91b-c638-4c64-95b7-1c58a6e5b132\") " pod="openshift-marketplace/redhat-operators-hbcq8" Jan 26 00:10:53 crc 
kubenswrapper[5124]: I0126 00:10:53.023113 5124 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2c16907d-1bcd-420c-879d-65a0552e69d3-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.023123 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5q88\" (UniqueName: \"kubernetes.io/projected/2c16907d-1bcd-420c-879d-65a0552e69d3-kube-api-access-z5q88\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.023131 5124 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c16907d-1bcd-420c-879d-65a0552e69d3-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:53 crc kubenswrapper[5124]: E0126 00:10:53.023361 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:53.523348697 +0000 UTC m=+131.432268046 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.124468 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.125089 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9075b91b-c638-4c64-95b7-1c58a6e5b132-utilities\") pod \"redhat-operators-hbcq8\" (UID: \"9075b91b-c638-4c64-95b7-1c58a6e5b132\") " pod="openshift-marketplace/redhat-operators-hbcq8" Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.125124 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ft46g\" (UniqueName: \"kubernetes.io/projected/9075b91b-c638-4c64-95b7-1c58a6e5b132-kube-api-access-ft46g\") pod \"redhat-operators-hbcq8\" (UID: \"9075b91b-c638-4c64-95b7-1c58a6e5b132\") " pod="openshift-marketplace/redhat-operators-hbcq8" Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.125140 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9075b91b-c638-4c64-95b7-1c58a6e5b132-catalog-content\") pod \"redhat-operators-hbcq8\" (UID: \"9075b91b-c638-4c64-95b7-1c58a6e5b132\") " pod="openshift-marketplace/redhat-operators-hbcq8" Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.125676 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9075b91b-c638-4c64-95b7-1c58a6e5b132-catalog-content\") pod \"redhat-operators-hbcq8\" (UID: \"9075b91b-c638-4c64-95b7-1c58a6e5b132\") " 
pod="openshift-marketplace/redhat-operators-hbcq8" Jan 26 00:10:53 crc kubenswrapper[5124]: E0126 00:10:53.125780 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:53.625763383 +0000 UTC m=+131.534682732 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.126057 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9075b91b-c638-4c64-95b7-1c58a6e5b132-utilities\") pod \"redhat-operators-hbcq8\" (UID: \"9075b91b-c638-4c64-95b7-1c58a6e5b132\") " pod="openshift-marketplace/redhat-operators-hbcq8" Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.146903 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft46g\" (UniqueName: \"kubernetes.io/projected/9075b91b-c638-4c64-95b7-1c58a6e5b132-kube-api-access-ft46g\") pod \"redhat-operators-hbcq8\" (UID: \"9075b91b-c638-4c64-95b7-1c58a6e5b132\") " pod="openshift-marketplace/redhat-operators-hbcq8" Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.203724 5124 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-9jvql container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:10:53 crc kubenswrapper[5124]: [-]has-synced failed: reason withheld Jan 26 00:10:53 crc kubenswrapper[5124]: [+]process-running ok Jan 26 00:10:53 crc kubenswrapper[5124]: healthz check failed Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.203824 5124 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-9jvql" podUID="c2cd8439-aeb3-4321-9842-11b3cbb37b0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.231035 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:53 crc kubenswrapper[5124]: E0126 00:10:53.231341 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:53.731328461 +0000 UTC m=+131.640247800 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.307223 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hbcq8" Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.331714 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:53 crc kubenswrapper[5124]: E0126 00:10:53.331875 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:53.831841276 +0000 UTC m=+131.740760645 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.332431 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:53 crc kubenswrapper[5124]: E0126 00:10:53.332761 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:10:53.832748151 +0000 UTC m=+131.741667500 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-25hx6" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.383649 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7m58f"] Jan 26 00:10:53 crc kubenswrapper[5124]: W0126 00:10:53.417205 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbeb215dd_478e_4b23_b77c_5e741e026932.slice/crio-2e984ef118349a8feef1f21a6a3ee57d7b6fe636ac627412c33ea58a2510f7f1 WatchSource:0}: Error finding container 2e984ef118349a8feef1f21a6a3ee57d7b6fe636ac627412c33ea58a2510f7f1: Status 404 returned error can't find the container with id 2e984ef118349a8feef1f21a6a3ee57d7b6fe636ac627412c33ea58a2510f7f1 Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.433898 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:53 crc kubenswrapper[5124]: E0126 00:10:53.434293 5124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:53.934275573 +0000 UTC m=+131.843194912 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.522512 5124 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-26T00:10:52.627352338Z","UUID":"b0161ba1-56c1-4879-ab1b-125e83ca9b34","Handler":null,"Name":"","Endpoint":""} Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.526753 5124 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.526783 5124 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.536370 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.542980 5124 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.543018 5124 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.547461 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hbcq8"] Jan 26 00:10:53 crc kubenswrapper[5124]: W0126 00:10:53.562581 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9075b91b_c638_4c64_95b7_1c58a6e5b132.slice/crio-d10f942128e82b50b1ec4dca89df42a296108e4a76de95a739a0a9d03377f6d2 WatchSource:0}: Error finding container d10f942128e82b50b1ec4dca89df42a296108e4a76de95a739a0a9d03377f6d2: Status 404 returned error can't find the container with id d10f942128e82b50b1ec4dca89df42a296108e4a76de95a739a0a9d03377f6d2 Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.577371 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-25hx6\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.602119 5124 generic.go:358] "Generic (PLEG): container finished" podID="e73a9b84-fd97-46e5-a51c-8f4ca069c13b" containerID="79b75c06745c3bd4f5a9764a315cc46606fc2286965860de374f9cb012c20f06" exitCode=0 Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.602179 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4qj8" event={"ID":"e73a9b84-fd97-46e5-a51c-8f4ca069c13b","Type":"ContainerDied","Data":"79b75c06745c3bd4f5a9764a315cc46606fc2286965860de374f9cb012c20f06"} Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.602246 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4qj8" event={"ID":"e73a9b84-fd97-46e5-a51c-8f4ca069c13b","Type":"ContainerStarted","Data":"3593d7b608d712d73af5b8735b179064d6be4a9a2ef6be50226bc224c55dc29e"} Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.604871 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7m58f" event={"ID":"beb215dd-478e-4b23-b77c-5e741e026932","Type":"ContainerStarted","Data":"2e984ef118349a8feef1f21a6a3ee57d7b6fe636ac627412c33ea58a2510f7f1"} Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.606786 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hbcq8" event={"ID":"9075b91b-c638-4c64-95b7-1c58a6e5b132","Type":"ContainerStarted","Data":"d10f942128e82b50b1ec4dca89df42a296108e4a76de95a739a0a9d03377f6d2"} Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.611716 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kwjfc" 
event={"ID":"2e062989-8ba6-44a5-8f95-e1958da237ad","Type":"ContainerStarted","Data":"98bbbd9dfc247c8039869484161a0e4cd0ed46153bb69755d10891e92adb5849"} Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.611775 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kwjfc" event={"ID":"2e062989-8ba6-44a5-8f95-e1958da237ad","Type":"ContainerStarted","Data":"f0a2078477f7f1f25dace391e765f34f5cf6e55023b9290dee2aeb70e5e89a71"} Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.618354 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-ldpxs" event={"ID":"2c16907d-1bcd-420c-879d-65a0552e69d3","Type":"ContainerDied","Data":"1766d0b45ff852106bb11b4c5aa54ee8ece02c662952487275bb0abb128e6f5c"} Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.618392 5124 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1766d0b45ff852106bb11b4c5aa54ee8ece02c662952487275bb0abb128e6f5c" Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.618491 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-ldpxs" Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.622533 5124 generic.go:358] "Generic (PLEG): container finished" podID="e867b2bf-b434-4fea-a6f9-6194fac536dd" containerID="48f6a75e18686c3631069eccfc97086dd13fc20980c1798db844efe8ba7b6cbb" exitCode=0 Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.622642 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"e867b2bf-b434-4fea-a6f9-6194fac536dd","Type":"ContainerDied","Data":"48f6a75e18686c3631069eccfc97086dd13fc20980c1798db844efe8ba7b6cbb"} Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.638774 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.642268 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-kwjfc" podStartSLOduration=14.642255887 podStartE2EDuration="14.642255887s" podCreationTimestamp="2026-01-26 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:53.640452469 +0000 UTC m=+131.549371808" watchObservedRunningTime="2026-01-26 00:10:53.642255887 +0000 UTC m=+131.551175236" Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.648139 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". 
PluginName "kubernetes.io/csi", VolumeGIDValue "" Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.661604 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.669232 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.705995 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-b7nfk" Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.706075 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-b7nfk" Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.707538 5124 patch_prober.go:28] interesting pod/console-64d44f6ddf-b7nfk container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.707739 5124 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-b7nfk" podUID="288efdc1-c138-42d5-9416-5c9d0faaa831" containerName="console" probeResult="failure" output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 26 00:10:53 crc kubenswrapper[5124]: I0126 00:10:53.870596 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-25hx6"] Jan 26 00:10:53 crc kubenswrapper[5124]: W0126 00:10:53.883296 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ce48d95_5f74_4d15_8f19_94cfd81c3dcf.slice/crio-b81aa5dd44d02238fab14ff86f412f84e62e671e33e2dac2d82ac0d9819fbb72 WatchSource:0}: Error finding container b81aa5dd44d02238fab14ff86f412f84e62e671e33e2dac2d82ac0d9819fbb72: Status 404 returned error can't find the container with id b81aa5dd44d02238fab14ff86f412f84e62e671e33e2dac2d82ac0d9819fbb72 Jan 26 00:10:54 crc kubenswrapper[5124]: I0126 00:10:54.204278 5124 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-9jvql container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:10:54 crc kubenswrapper[5124]: [-]has-synced failed: reason withheld Jan 26 00:10:54 crc kubenswrapper[5124]: [+]process-running ok Jan 26 00:10:54 crc kubenswrapper[5124]: healthz check failed Jan 26 00:10:54 crc kubenswrapper[5124]: I0126 00:10:54.204352 5124 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-9jvql" podUID="c2cd8439-aeb3-4321-9842-11b3cbb37b0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:10:54 crc kubenswrapper[5124]: I0126 00:10:54.366444 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-fpklc" Jan 26 00:10:54 crc kubenswrapper[5124]: I0126 00:10:54.366484 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-fpklc" Jan 26 00:10:54 crc kubenswrapper[5124]: I0126 00:10:54.379904 5124 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes" Jan 26 00:10:54 crc kubenswrapper[5124]: I0126 00:10:54.380668 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-fpklc" Jan 26 00:10:54 crc kubenswrapper[5124]: I0126 00:10:54.495787 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" Jan 26 00:10:54 crc kubenswrapper[5124]: I0126 00:10:54.496156 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" Jan 26 00:10:54 crc kubenswrapper[5124]: I0126 00:10:54.503422 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" Jan 26 00:10:54 crc kubenswrapper[5124]: I0126 00:10:54.635445 5124 generic.go:358] "Generic (PLEG): container finished" podID="beb215dd-478e-4b23-b77c-5e741e026932" containerID="7b8ddddc66633ed3855fa351c98bf8ace80162b9365fda3ddc9af06f1e2fcf04" exitCode=0 Jan 26 00:10:54 crc kubenswrapper[5124]: I0126 00:10:54.635527 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7m58f" event={"ID":"beb215dd-478e-4b23-b77c-5e741e026932","Type":"ContainerDied","Data":"7b8ddddc66633ed3855fa351c98bf8ace80162b9365fda3ddc9af06f1e2fcf04"} Jan 26 00:10:54 crc kubenswrapper[5124]: I0126 00:10:54.638731 5124 generic.go:358] "Generic (PLEG): container finished" podID="9075b91b-c638-4c64-95b7-1c58a6e5b132" containerID="70e5e06288e381a3ec07580f62312e3f7a6d389ae86773648977674fac676d6f" exitCode=0 Jan 26 00:10:54 crc kubenswrapper[5124]: I0126 00:10:54.639145 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hbcq8" event={"ID":"9075b91b-c638-4c64-95b7-1c58a6e5b132","Type":"ContainerDied","Data":"70e5e06288e381a3ec07580f62312e3f7a6d389ae86773648977674fac676d6f"} Jan 26 00:10:54 crc kubenswrapper[5124]: I0126 00:10:54.642512 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-25hx6" event={"ID":"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf","Type":"ContainerStarted","Data":"b4e96e87e1ab2b525c3619e0976d63bce8f15242efc1d83e505b7c980a6ba79b"} Jan 26 00:10:54 crc kubenswrapper[5124]: I0126 00:10:54.642543 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-25hx6" event={"ID":"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf","Type":"ContainerStarted","Data":"b81aa5dd44d02238fab14ff86f412f84e62e671e33e2dac2d82ac0d9819fbb72"} Jan 26 00:10:54 crc kubenswrapper[5124]: I0126 00:10:54.647456 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-fpklc" Jan 26 00:10:54 crc kubenswrapper[5124]: I0126 00:10:54.660069 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-s87zt" Jan 26 00:10:54 crc kubenswrapper[5124]: I0126 00:10:54.672823 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-25hx6" podStartSLOduration=113.672808341 podStartE2EDuration="1m53.672808341s" podCreationTimestamp="2026-01-26 00:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:54.671555038 +0000 UTC m=+132.580474397" watchObservedRunningTime="2026-01-26 00:10:54.672808341 +0000 UTC m=+132.581727690" Jan 26 00:10:55 crc kubenswrapper[5124]: I0126 00:10:55.006943 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 26 00:10:55 crc kubenswrapper[5124]: I0126 00:10:55.135400 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 26 00:10:55 crc kubenswrapper[5124]: I0126 00:10:55.136863 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e867b2bf-b434-4fea-a6f9-6194fac536dd" containerName="pruner" Jan 26 00:10:55 crc kubenswrapper[5124]: I0126 00:10:55.136890 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="e867b2bf-b434-4fea-a6f9-6194fac536dd" containerName="pruner" Jan 26 00:10:55 crc kubenswrapper[5124]: I0126 00:10:55.136995 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="e867b2bf-b434-4fea-a6f9-6194fac536dd" containerName="pruner" Jan 26 00:10:55 crc kubenswrapper[5124]: I0126 00:10:55.172745 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e867b2bf-b434-4fea-a6f9-6194fac536dd-kube-api-access\") pod \"e867b2bf-b434-4fea-a6f9-6194fac536dd\" (UID: \"e867b2bf-b434-4fea-a6f9-6194fac536dd\") " Jan 26 00:10:55 crc kubenswrapper[5124]: I0126 00:10:55.172952 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e867b2bf-b434-4fea-a6f9-6194fac536dd-kubelet-dir\") pod \"e867b2bf-b434-4fea-a6f9-6194fac536dd\" (UID: \"e867b2bf-b434-4fea-a6f9-6194fac536dd\") " Jan 26 00:10:55 crc kubenswrapper[5124]: I0126 00:10:55.173072 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e867b2bf-b434-4fea-a6f9-6194fac536dd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e867b2bf-b434-4fea-a6f9-6194fac536dd" (UID: "e867b2bf-b434-4fea-a6f9-6194fac536dd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:10:55 crc kubenswrapper[5124]: I0126 00:10:55.173316 5124 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e867b2bf-b434-4fea-a6f9-6194fac536dd-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:55 crc kubenswrapper[5124]: I0126 00:10:55.178984 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e867b2bf-b434-4fea-a6f9-6194fac536dd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e867b2bf-b434-4fea-a6f9-6194fac536dd" (UID: "e867b2bf-b434-4fea-a6f9-6194fac536dd"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:55 crc kubenswrapper[5124]: E0126 00:10:55.179049 5124 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="589a3be92836dc1a24ad9d394ab7344448ca60f4dd548fd16fd8668afb470622" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:10:55 crc kubenswrapper[5124]: E0126 00:10:55.180571 5124 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="589a3be92836dc1a24ad9d394ab7344448ca60f4dd548fd16fd8668afb470622" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:10:55 crc kubenswrapper[5124]: E0126 00:10:55.182114 5124 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="589a3be92836dc1a24ad9d394ab7344448ca60f4dd548fd16fd8668afb470622" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:10:55 crc kubenswrapper[5124]: E0126 00:10:55.182177 5124 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-vp4mw" podUID="a69d5905-85d8-49b8-ab54-15fc8f104c31" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 26 00:10:55 crc kubenswrapper[5124]: I0126 00:10:55.207697 5124 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-9jvql container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:10:55 crc kubenswrapper[5124]: [-]has-synced failed: reason withheld Jan 26 00:10:55 crc kubenswrapper[5124]: [+]process-running ok Jan 26 00:10:55 crc kubenswrapper[5124]: healthz check failed Jan 26 00:10:55 crc kubenswrapper[5124]: I0126 00:10:55.207794 5124 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-9jvql" podUID="c2cd8439-aeb3-4321-9842-11b3cbb37b0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:10:55 crc kubenswrapper[5124]: I0126 00:10:55.225125 5124 ???:1] "http: TLS handshake error from 192.168.126.11:56562: no serving certificate available for the kubelet" Jan 26 00:10:55 crc kubenswrapper[5124]: I0126 00:10:55.274130 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e867b2bf-b434-4fea-a6f9-6194fac536dd-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:55 crc kubenswrapper[5124]: I0126 00:10:55.662882 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 26 00:10:55 crc kubenswrapper[5124]: I0126 00:10:55.662980 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 26 00:10:55 crc kubenswrapper[5124]: I0126 00:10:55.663122 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 26 00:10:55 crc kubenswrapper[5124]: I0126 00:10:55.664864 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:10:55 crc kubenswrapper[5124]: I0126 00:10:55.664900 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"e867b2bf-b434-4fea-a6f9-6194fac536dd","Type":"ContainerDied","Data":"40ef2d2adea80e57178d9a633cb49059bd1f9f1759b8ebc44eeec79cf79f7751"} Jan 26 00:10:55 crc kubenswrapper[5124]: I0126 00:10:55.664944 5124 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40ef2d2adea80e57178d9a633cb49059bd1f9f1759b8ebc44eeec79cf79f7751" Jan 26 00:10:55 crc kubenswrapper[5124]: I0126 00:10:55.667019 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Jan 26 00:10:55 crc kubenswrapper[5124]: I0126 00:10:55.667113 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Jan 26 00:10:55 crc kubenswrapper[5124]: I0126 00:10:55.691284 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/01e8a3d0-83cc-43b6-b028-70688cd1c706-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"01e8a3d0-83cc-43b6-b028-70688cd1c706\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 26 00:10:55 crc kubenswrapper[5124]: I0126 00:10:55.691361 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/01e8a3d0-83cc-43b6-b028-70688cd1c706-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"01e8a3d0-83cc-43b6-b028-70688cd1c706\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 26 00:10:55 crc kubenswrapper[5124]: I0126 00:10:55.792868 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/01e8a3d0-83cc-43b6-b028-70688cd1c706-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"01e8a3d0-83cc-43b6-b028-70688cd1c706\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 26 00:10:55 crc kubenswrapper[5124]: I0126 00:10:55.792973 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/01e8a3d0-83cc-43b6-b028-70688cd1c706-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"01e8a3d0-83cc-43b6-b028-70688cd1c706\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 26 00:10:55 crc kubenswrapper[5124]: I0126 00:10:55.794085 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/01e8a3d0-83cc-43b6-b028-70688cd1c706-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"01e8a3d0-83cc-43b6-b028-70688cd1c706\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 26 00:10:55 crc kubenswrapper[5124]: I0126 00:10:55.811865 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/01e8a3d0-83cc-43b6-b028-70688cd1c706-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"01e8a3d0-83cc-43b6-b028-70688cd1c706\") " 
pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 26 00:10:55 crc kubenswrapper[5124]: I0126 00:10:55.999182 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 26 00:10:56 crc kubenswrapper[5124]: I0126 00:10:56.181552 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 26 00:10:56 crc kubenswrapper[5124]: W0126 00:10:56.188063 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod01e8a3d0_83cc_43b6_b028_70688cd1c706.slice/crio-2fc9fc28fbc02394d909d5db8f9d09e9826811032404d7dc8c5a09a2b4a6c944 WatchSource:0}: Error finding container 2fc9fc28fbc02394d909d5db8f9d09e9826811032404d7dc8c5a09a2b4a6c944: Status 404 returned error can't find the container with id 2fc9fc28fbc02394d909d5db8f9d09e9826811032404d7dc8c5a09a2b4a6c944 Jan 26 00:10:56 crc kubenswrapper[5124]: I0126 00:10:56.202541 5124 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-9jvql container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:10:56 crc kubenswrapper[5124]: [-]has-synced failed: reason withheld Jan 26 00:10:56 crc kubenswrapper[5124]: [+]process-running ok Jan 26 00:10:56 crc kubenswrapper[5124]: healthz check failed Jan 26 00:10:56 crc kubenswrapper[5124]: I0126 00:10:56.202613 5124 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-9jvql" podUID="c2cd8439-aeb3-4321-9842-11b3cbb37b0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:10:56 crc kubenswrapper[5124]: I0126 00:10:56.470070 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-n64rh" Jan 26 00:10:56 crc kubenswrapper[5124]: I0126 00:10:56.658746 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"01e8a3d0-83cc-43b6-b028-70688cd1c706","Type":"ContainerStarted","Data":"2fc9fc28fbc02394d909d5db8f9d09e9826811032404d7dc8c5a09a2b4a6c944"} Jan 26 00:10:57 crc kubenswrapper[5124]: I0126 00:10:57.219427 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-9jvql" Jan 26 00:10:57 crc kubenswrapper[5124]: I0126 00:10:57.223497 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-9jvql" Jan 26 00:10:58 crc kubenswrapper[5124]: I0126 00:10:58.314741 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-vcw8h" Jan 26 00:11:00 crc kubenswrapper[5124]: I0126 00:11:00.375007 5124 ???:1] "http: TLS handshake error from 192.168.126.11:56564: no serving certificate available for the kubelet" Jan 26 00:11:00 crc kubenswrapper[5124]: I0126 00:11:00.474092 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:11:02 crc kubenswrapper[5124]: I0126 00:11:02.691275 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nkp7h" event={"ID":"433ef7d9-9310-4fac-9271-fa7143485c0b","Type":"ContainerStarted","Data":"0cb9c82e5635ddf7f2ffd0484dedb5c41bde17c135cd11d54ccbad42e73e2f2a"} Jan 26 00:11:02 crc 
kubenswrapper[5124]: I0126 00:11:02.696038 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"01e8a3d0-83cc-43b6-b028-70688cd1c706","Type":"ContainerStarted","Data":"03244dfe9eb2bef120246d8d2eebd9f0abc9240783ef62f4bc0581c5f47483a3"} Jan 26 00:11:02 crc kubenswrapper[5124]: I0126 00:11:02.699519 5124 generic.go:358] "Generic (PLEG): container finished" podID="67b1669f-4753-4b71-bf6f-3b1972f4f33d" containerID="0524988e08b7745561df4411beff1a274b89c41c7774c5ca9de4a2a607d5bdda" exitCode=0 Jan 26 00:11:02 crc kubenswrapper[5124]: I0126 00:11:02.699712 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4898t" event={"ID":"67b1669f-4753-4b71-bf6f-3b1972f4f33d","Type":"ContainerDied","Data":"0524988e08b7745561df4411beff1a274b89c41c7774c5ca9de4a2a607d5bdda"} Jan 26 00:11:02 crc kubenswrapper[5124]: I0126 00:11:02.701449 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jk654" event={"ID":"93d4050c-d7fd-40b6-bd58-133f961c4077","Type":"ContainerStarted","Data":"12cb207cfb6cc3e8609fb618fbd895f956194d970049a203dcd54e52b273a060"} Jan 26 00:11:02 crc kubenswrapper[5124]: I0126 00:11:02.704702 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shkmx" event={"ID":"5ec6118e-bf44-44b1-8098-637ebd0083f7","Type":"ContainerStarted","Data":"6358f2631a13523f6a6804dc25a3f3787b165a85900798cc2210a947185a7a1d"} Jan 26 00:11:02 crc kubenswrapper[5124]: I0126 00:11:02.707146 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmhkx" event={"ID":"af75a02f-4678-4afa-a8c2-acaddf134bc4","Type":"ContainerStarted","Data":"5255f303ab49582b90357971883d487ca18ae223be49dfd4b69be669f57504bf"} Jan 26 00:11:02 crc kubenswrapper[5124]: I0126 00:11:02.712028 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4qj8" event={"ID":"e73a9b84-fd97-46e5-a51c-8f4ca069c13b","Type":"ContainerStarted","Data":"343b521a008186a07c5214e473c9e4646c1715392f3b44b70553210abbb38c1c"} Jan 26 00:11:02 crc kubenswrapper[5124]: I0126 00:11:02.855572 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=7.8555550610000004 podStartE2EDuration="7.855555061s" podCreationTimestamp="2026-01-26 00:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:02.846098526 +0000 UTC m=+140.755017875" watchObservedRunningTime="2026-01-26 00:11:02.855555061 +0000 UTC m=+140.764474410" Jan 26 00:11:02 crc kubenswrapper[5124]: E0126 00:11:02.912760 5124 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93d4050c_d7fd_40b6_bd58_133f961c4077.slice/crio-12cb207cfb6cc3e8609fb618fbd895f956194d970049a203dcd54e52b273a060.scope\": RecentStats: unable to find data in memory cache]" Jan 26 00:11:03 crc kubenswrapper[5124]: I0126 00:11:03.712675 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-b7nfk" Jan 26 00:11:03 crc kubenswrapper[5124]: I0126 00:11:03.718242 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-b7nfk" Jan 26 
00:11:03 crc kubenswrapper[5124]: I0126 00:11:03.723179 5124 generic.go:358] "Generic (PLEG): container finished" podID="93d4050c-d7fd-40b6-bd58-133f961c4077" containerID="12cb207cfb6cc3e8609fb618fbd895f956194d970049a203dcd54e52b273a060" exitCode=0 Jan 26 00:11:03 crc kubenswrapper[5124]: I0126 00:11:03.723272 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jk654" event={"ID":"93d4050c-d7fd-40b6-bd58-133f961c4077","Type":"ContainerDied","Data":"12cb207cfb6cc3e8609fb618fbd895f956194d970049a203dcd54e52b273a060"} Jan 26 00:11:03 crc kubenswrapper[5124]: I0126 00:11:03.733077 5124 generic.go:358] "Generic (PLEG): container finished" podID="5ec6118e-bf44-44b1-8098-637ebd0083f7" containerID="6358f2631a13523f6a6804dc25a3f3787b165a85900798cc2210a947185a7a1d" exitCode=0 Jan 26 00:11:03 crc kubenswrapper[5124]: I0126 00:11:03.733159 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shkmx" event={"ID":"5ec6118e-bf44-44b1-8098-637ebd0083f7","Type":"ContainerDied","Data":"6358f2631a13523f6a6804dc25a3f3787b165a85900798cc2210a947185a7a1d"} Jan 26 00:11:03 crc kubenswrapper[5124]: I0126 00:11:03.737943 5124 generic.go:358] "Generic (PLEG): container finished" podID="af75a02f-4678-4afa-a8c2-acaddf134bc4" containerID="5255f303ab49582b90357971883d487ca18ae223be49dfd4b69be669f57504bf" exitCode=0 Jan 26 00:11:03 crc kubenswrapper[5124]: I0126 00:11:03.737979 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmhkx" event={"ID":"af75a02f-4678-4afa-a8c2-acaddf134bc4","Type":"ContainerDied","Data":"5255f303ab49582b90357971883d487ca18ae223be49dfd4b69be669f57504bf"} Jan 26 00:11:03 crc kubenswrapper[5124]: I0126 00:11:03.742088 5124 generic.go:358] "Generic (PLEG): container finished" podID="e73a9b84-fd97-46e5-a51c-8f4ca069c13b" containerID="343b521a008186a07c5214e473c9e4646c1715392f3b44b70553210abbb38c1c" exitCode=0 Jan 26 00:11:03 crc kubenswrapper[5124]: I0126 00:11:03.742288 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4qj8" event={"ID":"e73a9b84-fd97-46e5-a51c-8f4ca069c13b","Type":"ContainerDied","Data":"343b521a008186a07c5214e473c9e4646c1715392f3b44b70553210abbb38c1c"} Jan 26 00:11:03 crc kubenswrapper[5124]: I0126 00:11:03.742312 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4qj8" event={"ID":"e73a9b84-fd97-46e5-a51c-8f4ca069c13b","Type":"ContainerStarted","Data":"514e72e6c744d0e835e2ced78ac51c68e0ec35056d7f2be8f13dde77f1886150"} Jan 26 00:11:03 crc kubenswrapper[5124]: I0126 00:11:03.743883 5124 generic.go:358] "Generic (PLEG): container finished" podID="433ef7d9-9310-4fac-9271-fa7143485c0b" containerID="0cb9c82e5635ddf7f2ffd0484dedb5c41bde17c135cd11d54ccbad42e73e2f2a" exitCode=0 Jan 26 00:11:03 crc kubenswrapper[5124]: I0126 00:11:03.743936 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nkp7h" event={"ID":"433ef7d9-9310-4fac-9271-fa7143485c0b","Type":"ContainerDied","Data":"0cb9c82e5635ddf7f2ffd0484dedb5c41bde17c135cd11d54ccbad42e73e2f2a"} Jan 26 00:11:03 crc kubenswrapper[5124]: I0126 00:11:03.746524 5124 generic.go:358] "Generic (PLEG): container finished" podID="01e8a3d0-83cc-43b6-b028-70688cd1c706" containerID="03244dfe9eb2bef120246d8d2eebd9f0abc9240783ef62f4bc0581c5f47483a3" exitCode=0 Jan 26 00:11:03 crc kubenswrapper[5124]: I0126 00:11:03.746641 5124 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"01e8a3d0-83cc-43b6-b028-70688cd1c706","Type":"ContainerDied","Data":"03244dfe9eb2bef120246d8d2eebd9f0abc9240783ef62f4bc0581c5f47483a3"} Jan 26 00:11:03 crc kubenswrapper[5124]: I0126 00:11:03.752669 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4898t" event={"ID":"67b1669f-4753-4b71-bf6f-3b1972f4f33d","Type":"ContainerStarted","Data":"19abfeb851e1ad15dae47c652f6d05276eef1067a9497556f6d532afe731a544"} Jan 26 00:11:03 crc kubenswrapper[5124]: I0126 00:11:03.863990 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4898t" podStartSLOduration=3.141848437 podStartE2EDuration="12.863971657s" podCreationTimestamp="2026-01-26 00:10:51 +0000 UTC" firstStartedPulling="2026-01-26 00:10:52.55387963 +0000 UTC m=+130.462798979" lastFinishedPulling="2026-01-26 00:11:02.27600285 +0000 UTC m=+140.184922199" observedRunningTime="2026-01-26 00:11:03.86296176 +0000 UTC m=+141.771881119" watchObservedRunningTime="2026-01-26 00:11:03.863971657 +0000 UTC m=+141.772891006" Jan 26 00:11:03 crc kubenswrapper[5124]: I0126 00:11:03.882081 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-t4qj8" podStartSLOduration=4.197397815 podStartE2EDuration="12.882068394s" podCreationTimestamp="2026-01-26 00:10:51 +0000 UTC" firstStartedPulling="2026-01-26 00:10:53.603312224 +0000 UTC m=+131.512231563" lastFinishedPulling="2026-01-26 00:11:02.287982793 +0000 UTC m=+140.196902142" observedRunningTime="2026-01-26 00:11:03.880333048 +0000 UTC m=+141.789252407" watchObservedRunningTime="2026-01-26 00:11:03.882068394 +0000 UTC m=+141.790987733" Jan 26 00:11:03 crc kubenswrapper[5124]: I0126 00:11:03.913719 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:11:04 crc kubenswrapper[5124]: I0126 00:11:04.762166 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jk654" event={"ID":"93d4050c-d7fd-40b6-bd58-133f961c4077","Type":"ContainerStarted","Data":"4d01b14e6d6288adddf228135c1fd3d03a51f5753f4f3146556b825b89382b9f"} Jan 26 00:11:04 crc kubenswrapper[5124]: I0126 00:11:04.768780 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shkmx" event={"ID":"5ec6118e-bf44-44b1-8098-637ebd0083f7","Type":"ContainerStarted","Data":"c76f97227b6c37dc8fa602630009930435dc45b0801d70920b6538fb8dc1cb5c"} Jan 26 00:11:04 crc kubenswrapper[5124]: I0126 00:11:04.771900 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmhkx" event={"ID":"af75a02f-4678-4afa-a8c2-acaddf134bc4","Type":"ContainerStarted","Data":"3699ec5e26d03697e3fb3bebb29d4f6cd73bd96ca510a4692216c21325bd7bdd"} Jan 26 00:11:04 crc kubenswrapper[5124]: I0126 00:11:04.777545 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nkp7h" event={"ID":"433ef7d9-9310-4fac-9271-fa7143485c0b","Type":"ContainerStarted","Data":"cb93442960e77f6d04f9aa7a35e94462e9557163d6a400da6cac1497427b5b86"} Jan 26 00:11:04 crc kubenswrapper[5124]: I0126 00:11:04.783376 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jk654" podStartSLOduration=4.993785354 podStartE2EDuration="15.783361862s" 
podCreationTimestamp="2026-01-26 00:10:49 +0000 UTC" firstStartedPulling="2026-01-26 00:10:51.519999588 +0000 UTC m=+129.428918937" lastFinishedPulling="2026-01-26 00:11:02.309576096 +0000 UTC m=+140.218495445" observedRunningTime="2026-01-26 00:11:04.783217458 +0000 UTC m=+142.692136807" watchObservedRunningTime="2026-01-26 00:11:04.783361862 +0000 UTC m=+142.692281211" Jan 26 00:11:04 crc kubenswrapper[5124]: I0126 00:11:04.800678 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nkp7h" podStartSLOduration=5.018277619 podStartE2EDuration="15.800664978s" podCreationTimestamp="2026-01-26 00:10:49 +0000 UTC" firstStartedPulling="2026-01-26 00:10:51.505574404 +0000 UTC m=+129.414493753" lastFinishedPulling="2026-01-26 00:11:02.287961763 +0000 UTC m=+140.196881112" observedRunningTime="2026-01-26 00:11:04.797450122 +0000 UTC m=+142.706369491" watchObservedRunningTime="2026-01-26 00:11:04.800664978 +0000 UTC m=+142.709584327" Jan 26 00:11:04 crc kubenswrapper[5124]: I0126 00:11:04.826832 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-shkmx" podStartSLOduration=6.108696614 podStartE2EDuration="15.826820004s" podCreationTimestamp="2026-01-26 00:10:49 +0000 UTC" firstStartedPulling="2026-01-26 00:10:52.569844233 +0000 UTC m=+130.478763582" lastFinishedPulling="2026-01-26 00:11:02.287967623 +0000 UTC m=+140.196886972" observedRunningTime="2026-01-26 00:11:04.826581178 +0000 UTC m=+142.735500537" watchObservedRunningTime="2026-01-26 00:11:04.826820004 +0000 UTC m=+142.735739353" Jan 26 00:11:04 crc kubenswrapper[5124]: I0126 00:11:04.844965 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rmhkx" podStartSLOduration=6.130134996 podStartE2EDuration="15.844947293s" podCreationTimestamp="2026-01-26 00:10:49 +0000 UTC" firstStartedPulling="2026-01-26 00:10:52.581980595 +0000 UTC m=+130.490899944" lastFinishedPulling="2026-01-26 00:11:02.296792892 +0000 UTC m=+140.205712241" observedRunningTime="2026-01-26 00:11:04.843525765 +0000 UTC m=+142.752445124" watchObservedRunningTime="2026-01-26 00:11:04.844947293 +0000 UTC m=+142.753866642" Jan 26 00:11:05 crc kubenswrapper[5124]: I0126 00:11:05.136303 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 26 00:11:05 crc kubenswrapper[5124]: E0126 00:11:05.176651 5124 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="589a3be92836dc1a24ad9d394ab7344448ca60f4dd548fd16fd8668afb470622" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:11:05 crc kubenswrapper[5124]: E0126 00:11:05.179073 5124 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="589a3be92836dc1a24ad9d394ab7344448ca60f4dd548fd16fd8668afb470622" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:11:05 crc kubenswrapper[5124]: E0126 00:11:05.183630 5124 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="589a3be92836dc1a24ad9d394ab7344448ca60f4dd548fd16fd8668afb470622" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:11:05 crc kubenswrapper[5124]: E0126 00:11:05.183705 5124 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-vp4mw" podUID="a69d5905-85d8-49b8-ab54-15fc8f104c31" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 26 00:11:05 crc kubenswrapper[5124]: I0126 00:11:05.313964 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/01e8a3d0-83cc-43b6-b028-70688cd1c706-kubelet-dir\") pod \"01e8a3d0-83cc-43b6-b028-70688cd1c706\" (UID: \"01e8a3d0-83cc-43b6-b028-70688cd1c706\") " Jan 26 00:11:05 crc kubenswrapper[5124]: I0126 00:11:05.314025 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/01e8a3d0-83cc-43b6-b028-70688cd1c706-kube-api-access\") pod \"01e8a3d0-83cc-43b6-b028-70688cd1c706\" (UID: \"01e8a3d0-83cc-43b6-b028-70688cd1c706\") " Jan 26 00:11:05 crc kubenswrapper[5124]: I0126 00:11:05.314089 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01e8a3d0-83cc-43b6-b028-70688cd1c706-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "01e8a3d0-83cc-43b6-b028-70688cd1c706" (UID: "01e8a3d0-83cc-43b6-b028-70688cd1c706"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5124]: I0126 00:11:05.315461 5124 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/01e8a3d0-83cc-43b6-b028-70688cd1c706-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5124]: I0126 00:11:05.319809 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01e8a3d0-83cc-43b6-b028-70688cd1c706-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "01e8a3d0-83cc-43b6-b028-70688cd1c706" (UID: "01e8a3d0-83cc-43b6-b028-70688cd1c706"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5124]: I0126 00:11:05.416531 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/01e8a3d0-83cc-43b6-b028-70688cd1c706-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5124]: I0126 00:11:05.797404 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 26 00:11:05 crc kubenswrapper[5124]: I0126 00:11:05.797395 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"01e8a3d0-83cc-43b6-b028-70688cd1c706","Type":"ContainerDied","Data":"2fc9fc28fbc02394d909d5db8f9d09e9826811032404d7dc8c5a09a2b4a6c944"} Jan 26 00:11:05 crc kubenswrapper[5124]: I0126 00:11:05.797465 5124 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fc9fc28fbc02394d909d5db8f9d09e9826811032404d7dc8c5a09a2b4a6c944" Jan 26 00:11:09 crc kubenswrapper[5124]: I0126 00:11:09.746355 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-jk654" Jan 26 00:11:09 crc kubenswrapper[5124]: I0126 00:11:09.746955 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jk654" Jan 26 00:11:09 crc kubenswrapper[5124]: I0126 00:11:09.853696 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jk654" Jan 26 00:11:09 crc kubenswrapper[5124]: I0126 00:11:09.896375 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jk654" Jan 26 00:11:10 crc kubenswrapper[5124]: I0126 00:11:10.635638 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-shkmx" Jan 26 00:11:10 crc kubenswrapper[5124]: I0126 00:11:10.636050 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-shkmx" Jan 26 00:11:10 crc kubenswrapper[5124]: I0126 00:11:10.636099 5124 ???:1] "http: TLS handshake error from 192.168.126.11:60352: no serving certificate available for the kubelet" Jan 26 00:11:10 crc kubenswrapper[5124]: I0126 00:11:10.649169 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nkp7h" Jan 26 00:11:10 crc kubenswrapper[5124]: I0126 00:11:10.649250 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-nkp7h" Jan 26 00:11:10 crc kubenswrapper[5124]: I0126 00:11:10.676199 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-rmhkx" Jan 26 00:11:10 crc kubenswrapper[5124]: I0126 00:11:10.677026 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rmhkx" Jan 26 00:11:10 crc kubenswrapper[5124]: I0126 00:11:10.690374 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-shkmx" Jan 26 00:11:10 crc kubenswrapper[5124]: I0126 00:11:10.741419 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-rmhkx" Jan 26 00:11:10 crc kubenswrapper[5124]: I0126 00:11:10.870454 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-shkmx" Jan 26 00:11:10 crc kubenswrapper[5124]: I0126 00:11:10.881902 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rmhkx" Jan 26 00:11:11 crc kubenswrapper[5124]: I0126 00:11:11.698144 5124 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-nkp7h" podUID="433ef7d9-9310-4fac-9271-fa7143485c0b" containerName="registry-server" probeResult="failure" output=< Jan 26 00:11:11 crc kubenswrapper[5124]: timeout: failed to connect service ":50051" within 1s Jan 26 00:11:11 crc kubenswrapper[5124]: > Jan 26 00:11:11 crc kubenswrapper[5124]: I0126 00:11:11.715849 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4898t" Jan 26 00:11:11 crc kubenswrapper[5124]: I0126 00:11:11.715920 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-4898t" Jan 26 00:11:11 crc kubenswrapper[5124]: I0126 00:11:11.766433 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4898t" Jan 26 00:11:11 crc kubenswrapper[5124]: I0126 00:11:11.942961 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rmhkx"] Jan 26 00:11:12 crc kubenswrapper[5124]: I0126 00:11:12.122392 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-t4qj8" Jan 26 00:11:12 crc kubenswrapper[5124]: I0126 00:11:12.122458 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-t4qj8" Jan 26 00:11:12 crc kubenswrapper[5124]: I0126 00:11:12.165413 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-t4qj8" Jan 26 00:11:12 crc kubenswrapper[5124]: I0126 00:11:12.565075 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4898t" Jan 26 00:11:12 crc kubenswrapper[5124]: I0126 00:11:12.874638 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-t4qj8" Jan 26 00:11:13 crc kubenswrapper[5124]: I0126 00:11:13.842371 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rmhkx" podUID="af75a02f-4678-4afa-a8c2-acaddf134bc4" containerName="registry-server" containerID="cri-o://3699ec5e26d03697e3fb3bebb29d4f6cd73bd96ca510a4692216c21325bd7bdd" gracePeriod=2 Jan 26 00:11:14 crc kubenswrapper[5124]: I0126 00:11:14.148769 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4qj8"] Jan 26 00:11:14 crc kubenswrapper[5124]: I0126 00:11:14.847152 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-t4qj8" podUID="e73a9b84-fd97-46e5-a51c-8f4ca069c13b" containerName="registry-server" containerID="cri-o://514e72e6c744d0e835e2ced78ac51c68e0ec35056d7f2be8f13dde77f1886150" gracePeriod=2 Jan 26 00:11:15 crc kubenswrapper[5124]: E0126 00:11:15.174923 5124 log.go:32] 
"ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="589a3be92836dc1a24ad9d394ab7344448ca60f4dd548fd16fd8668afb470622" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:11:15 crc kubenswrapper[5124]: E0126 00:11:15.176364 5124 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="589a3be92836dc1a24ad9d394ab7344448ca60f4dd548fd16fd8668afb470622" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:11:15 crc kubenswrapper[5124]: E0126 00:11:15.178058 5124 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="589a3be92836dc1a24ad9d394ab7344448ca60f4dd548fd16fd8668afb470622" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:11:15 crc kubenswrapper[5124]: E0126 00:11:15.178124 5124 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-vp4mw" podUID="a69d5905-85d8-49b8-ab54-15fc8f104c31" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 26 00:11:15 crc kubenswrapper[5124]: I0126 00:11:15.483404 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:16 crc kubenswrapper[5124]: I0126 00:11:16.859282 5124 generic.go:358] "Generic (PLEG): container finished" podID="af75a02f-4678-4afa-a8c2-acaddf134bc4" containerID="3699ec5e26d03697e3fb3bebb29d4f6cd73bd96ca510a4692216c21325bd7bdd" exitCode=0 Jan 26 00:11:16 crc kubenswrapper[5124]: I0126 00:11:16.859350 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmhkx" event={"ID":"af75a02f-4678-4afa-a8c2-acaddf134bc4","Type":"ContainerDied","Data":"3699ec5e26d03697e3fb3bebb29d4f6cd73bd96ca510a4692216c21325bd7bdd"} Jan 26 00:11:16 crc kubenswrapper[5124]: I0126 00:11:16.883643 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:11:20 crc kubenswrapper[5124]: I0126 00:11:20.477082 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kpn7g" Jan 26 00:11:20 crc kubenswrapper[5124]: I0126 00:11:20.682182 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nkp7h" Jan 26 00:11:20 crc kubenswrapper[5124]: I0126 00:11:20.717012 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nkp7h" Jan 26 00:11:20 crc kubenswrapper[5124]: E0126 00:11:20.829039 5124 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3699ec5e26d03697e3fb3bebb29d4f6cd73bd96ca510a4692216c21325bd7bdd is running failed: container process not found" containerID="3699ec5e26d03697e3fb3bebb29d4f6cd73bd96ca510a4692216c21325bd7bdd" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 
Jan 26 00:11:20 crc kubenswrapper[5124]: E0126 00:11:20.829849 5124 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3699ec5e26d03697e3fb3bebb29d4f6cd73bd96ca510a4692216c21325bd7bdd is running failed: container process not found" containerID="3699ec5e26d03697e3fb3bebb29d4f6cd73bd96ca510a4692216c21325bd7bdd" cmd=["grpc_health_probe","-addr=:50051"]
Jan 26 00:11:20 crc kubenswrapper[5124]: E0126 00:11:20.830145 5124 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3699ec5e26d03697e3fb3bebb29d4f6cd73bd96ca510a4692216c21325bd7bdd is running failed: container process not found" containerID="3699ec5e26d03697e3fb3bebb29d4f6cd73bd96ca510a4692216c21325bd7bdd" cmd=["grpc_health_probe","-addr=:50051"]
Jan 26 00:11:20 crc kubenswrapper[5124]: E0126 00:11:20.830241 5124 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3699ec5e26d03697e3fb3bebb29d4f6cd73bd96ca510a4692216c21325bd7bdd is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-rmhkx" podUID="af75a02f-4678-4afa-a8c2-acaddf134bc4" containerName="registry-server" probeResult="unknown"
Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.494668 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-vp4mw_a69d5905-85d8-49b8-ab54-15fc8f104c31/kube-multus-additional-cni-plugins/0.log"
Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.495074 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-vp4mw"
Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.652007 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rmhkx"
Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.664473 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/a69d5905-85d8-49b8-ab54-15fc8f104c31-ready\") pod \"a69d5905-85d8-49b8-ab54-15fc8f104c31\" (UID: \"a69d5905-85d8-49b8-ab54-15fc8f104c31\") "
Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.666153 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a69d5905-85d8-49b8-ab54-15fc8f104c31-ready" (OuterVolumeSpecName: "ready") pod "a69d5905-85d8-49b8-ab54-15fc8f104c31" (UID: "a69d5905-85d8-49b8-ab54-15fc8f104c31"). InnerVolumeSpecName "ready".
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.669956 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a69d5905-85d8-49b8-ab54-15fc8f104c31-cni-sysctl-allowlist\") pod \"a69d5905-85d8-49b8-ab54-15fc8f104c31\" (UID: \"a69d5905-85d8-49b8-ab54-15fc8f104c31\") " Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.670021 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxlwg\" (UniqueName: \"kubernetes.io/projected/a69d5905-85d8-49b8-ab54-15fc8f104c31-kube-api-access-fxlwg\") pod \"a69d5905-85d8-49b8-ab54-15fc8f104c31\" (UID: \"a69d5905-85d8-49b8-ab54-15fc8f104c31\") " Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.670080 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a69d5905-85d8-49b8-ab54-15fc8f104c31-tuning-conf-dir\") pod \"a69d5905-85d8-49b8-ab54-15fc8f104c31\" (UID: \"a69d5905-85d8-49b8-ab54-15fc8f104c31\") " Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.670522 5124 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/a69d5905-85d8-49b8-ab54-15fc8f104c31-ready\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.670559 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a69d5905-85d8-49b8-ab54-15fc8f104c31-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "a69d5905-85d8-49b8-ab54-15fc8f104c31" (UID: "a69d5905-85d8-49b8-ab54-15fc8f104c31"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.670644 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a69d5905-85d8-49b8-ab54-15fc8f104c31-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "a69d5905-85d8-49b8-ab54-15fc8f104c31" (UID: "a69d5905-85d8-49b8-ab54-15fc8f104c31"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.682126 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a69d5905-85d8-49b8-ab54-15fc8f104c31-kube-api-access-fxlwg" (OuterVolumeSpecName: "kube-api-access-fxlwg") pod "a69d5905-85d8-49b8-ab54-15fc8f104c31" (UID: "a69d5905-85d8-49b8-ab54-15fc8f104c31"). InnerVolumeSpecName "kube-api-access-fxlwg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.716711 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t4qj8" Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.771819 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af75a02f-4678-4afa-a8c2-acaddf134bc4-utilities\") pod \"af75a02f-4678-4afa-a8c2-acaddf134bc4\" (UID: \"af75a02f-4678-4afa-a8c2-acaddf134bc4\") " Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.771951 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af75a02f-4678-4afa-a8c2-acaddf134bc4-catalog-content\") pod \"af75a02f-4678-4afa-a8c2-acaddf134bc4\" (UID: \"af75a02f-4678-4afa-a8c2-acaddf134bc4\") " Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.772004 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4w5n2\" (UniqueName: \"kubernetes.io/projected/af75a02f-4678-4afa-a8c2-acaddf134bc4-kube-api-access-4w5n2\") pod \"af75a02f-4678-4afa-a8c2-acaddf134bc4\" (UID: \"af75a02f-4678-4afa-a8c2-acaddf134bc4\") " Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.772249 5124 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a69d5905-85d8-49b8-ab54-15fc8f104c31-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.772271 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fxlwg\" (UniqueName: \"kubernetes.io/projected/a69d5905-85d8-49b8-ab54-15fc8f104c31-kube-api-access-fxlwg\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.772327 5124 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a69d5905-85d8-49b8-ab54-15fc8f104c31-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.773239 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af75a02f-4678-4afa-a8c2-acaddf134bc4-utilities" (OuterVolumeSpecName: "utilities") pod "af75a02f-4678-4afa-a8c2-acaddf134bc4" (UID: "af75a02f-4678-4afa-a8c2-acaddf134bc4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.777465 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af75a02f-4678-4afa-a8c2-acaddf134bc4-kube-api-access-4w5n2" (OuterVolumeSpecName: "kube-api-access-4w5n2") pod "af75a02f-4678-4afa-a8c2-acaddf134bc4" (UID: "af75a02f-4678-4afa-a8c2-acaddf134bc4"). InnerVolumeSpecName "kube-api-access-4w5n2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.816241 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af75a02f-4678-4afa-a8c2-acaddf134bc4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af75a02f-4678-4afa-a8c2-acaddf134bc4" (UID: "af75a02f-4678-4afa-a8c2-acaddf134bc4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.873247 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e73a9b84-fd97-46e5-a51c-8f4ca069c13b-catalog-content\") pod \"e73a9b84-fd97-46e5-a51c-8f4ca069c13b\" (UID: \"e73a9b84-fd97-46e5-a51c-8f4ca069c13b\") " Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.873387 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e73a9b84-fd97-46e5-a51c-8f4ca069c13b-utilities\") pod \"e73a9b84-fd97-46e5-a51c-8f4ca069c13b\" (UID: \"e73a9b84-fd97-46e5-a51c-8f4ca069c13b\") " Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.873435 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mf2w\" (UniqueName: \"kubernetes.io/projected/e73a9b84-fd97-46e5-a51c-8f4ca069c13b-kube-api-access-6mf2w\") pod \"e73a9b84-fd97-46e5-a51c-8f4ca069c13b\" (UID: \"e73a9b84-fd97-46e5-a51c-8f4ca069c13b\") " Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.873726 5124 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af75a02f-4678-4afa-a8c2-acaddf134bc4-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.873749 5124 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af75a02f-4678-4afa-a8c2-acaddf134bc4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.873785 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4w5n2\" (UniqueName: \"kubernetes.io/projected/af75a02f-4678-4afa-a8c2-acaddf134bc4-kube-api-access-4w5n2\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.874775 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e73a9b84-fd97-46e5-a51c-8f4ca069c13b-utilities" (OuterVolumeSpecName: "utilities") pod "e73a9b84-fd97-46e5-a51c-8f4ca069c13b" (UID: "e73a9b84-fd97-46e5-a51c-8f4ca069c13b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.885732 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e73a9b84-fd97-46e5-a51c-8f4ca069c13b-kube-api-access-6mf2w" (OuterVolumeSpecName: "kube-api-access-6mf2w") pod "e73a9b84-fd97-46e5-a51c-8f4ca069c13b" (UID: "e73a9b84-fd97-46e5-a51c-8f4ca069c13b"). InnerVolumeSpecName "kube-api-access-6mf2w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.890929 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e73a9b84-fd97-46e5-a51c-8f4ca069c13b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e73a9b84-fd97-46e5-a51c-8f4ca069c13b" (UID: "e73a9b84-fd97-46e5-a51c-8f4ca069c13b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.893387 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7m58f" event={"ID":"beb215dd-478e-4b23-b77c-5e741e026932","Type":"ContainerStarted","Data":"585ad95565c25b69404f055f5952485511d97b01057417249e6e093bd69de12b"} Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.894687 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-vp4mw_a69d5905-85d8-49b8-ab54-15fc8f104c31/kube-multus-additional-cni-plugins/0.log" Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.894726 5124 generic.go:358] "Generic (PLEG): container finished" podID="a69d5905-85d8-49b8-ab54-15fc8f104c31" containerID="589a3be92836dc1a24ad9d394ab7344448ca60f4dd548fd16fd8668afb470622" exitCode=137 Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.894871 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-vp4mw" Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.894767 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-vp4mw" event={"ID":"a69d5905-85d8-49b8-ab54-15fc8f104c31","Type":"ContainerDied","Data":"589a3be92836dc1a24ad9d394ab7344448ca60f4dd548fd16fd8668afb470622"} Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.895057 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-vp4mw" event={"ID":"a69d5905-85d8-49b8-ab54-15fc8f104c31","Type":"ContainerDied","Data":"9ee7c537f0f7b0f4d50b6ab82b73cce7de07712da1cc09804a170281d899f9b9"} Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.895102 5124 scope.go:117] "RemoveContainer" containerID="589a3be92836dc1a24ad9d394ab7344448ca60f4dd548fd16fd8668afb470622" Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.896902 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hbcq8" event={"ID":"9075b91b-c638-4c64-95b7-1c58a6e5b132","Type":"ContainerStarted","Data":"d3976c025e1bf8b0058fa5c5281a3c0e35cfaaa0d83de77f07b8d7cb9c52c50b"} Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.898894 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmhkx" event={"ID":"af75a02f-4678-4afa-a8c2-acaddf134bc4","Type":"ContainerDied","Data":"719acc09216e8307e71e25002684075bd3deb615a21ab323e1ca1bb2b70e25fe"} Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.898951 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rmhkx" Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.903420 5124 generic.go:358] "Generic (PLEG): container finished" podID="e73a9b84-fd97-46e5-a51c-8f4ca069c13b" containerID="514e72e6c744d0e835e2ced78ac51c68e0ec35056d7f2be8f13dde77f1886150" exitCode=0 Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.903463 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4qj8" event={"ID":"e73a9b84-fd97-46e5-a51c-8f4ca069c13b","Type":"ContainerDied","Data":"514e72e6c744d0e835e2ced78ac51c68e0ec35056d7f2be8f13dde77f1886150"} Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.903501 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t4qj8" Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.903521 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4qj8" event={"ID":"e73a9b84-fd97-46e5-a51c-8f4ca069c13b","Type":"ContainerDied","Data":"3593d7b608d712d73af5b8735b179064d6be4a9a2ef6be50226bc224c55dc29e"} Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.932227 5124 scope.go:117] "RemoveContainer" containerID="589a3be92836dc1a24ad9d394ab7344448ca60f4dd548fd16fd8668afb470622" Jan 26 00:11:22 crc kubenswrapper[5124]: E0126 00:11:22.932513 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"589a3be92836dc1a24ad9d394ab7344448ca60f4dd548fd16fd8668afb470622\": container with ID starting with 589a3be92836dc1a24ad9d394ab7344448ca60f4dd548fd16fd8668afb470622 not found: ID does not exist" containerID="589a3be92836dc1a24ad9d394ab7344448ca60f4dd548fd16fd8668afb470622" Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.932539 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"589a3be92836dc1a24ad9d394ab7344448ca60f4dd548fd16fd8668afb470622"} err="failed to get container status \"589a3be92836dc1a24ad9d394ab7344448ca60f4dd548fd16fd8668afb470622\": rpc error: code = NotFound desc = could not find container \"589a3be92836dc1a24ad9d394ab7344448ca60f4dd548fd16fd8668afb470622\": container with ID starting with 589a3be92836dc1a24ad9d394ab7344448ca60f4dd548fd16fd8668afb470622 not found: ID does not exist" Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.932568 5124 scope.go:117] "RemoveContainer" containerID="3699ec5e26d03697e3fb3bebb29d4f6cd73bd96ca510a4692216c21325bd7bdd" Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.954231 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-vp4mw"] Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.962711 5124 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-vp4mw"] Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.966608 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nkp7h"] Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.966951 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nkp7h" podUID="433ef7d9-9310-4fac-9271-fa7143485c0b" containerName="registry-server" containerID="cri-o://cb93442960e77f6d04f9aa7a35e94462e9557163d6a400da6cac1497427b5b86" gracePeriod=2 Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.970492 5124 scope.go:117] "RemoveContainer" containerID="5255f303ab49582b90357971883d487ca18ae223be49dfd4b69be669f57504bf" Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.972149 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rmhkx"] Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.975351 5124 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e73a9b84-fd97-46e5-a51c-8f4ca069c13b-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.975385 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6mf2w\" (UniqueName: 
\"kubernetes.io/projected/e73a9b84-fd97-46e5-a51c-8f4ca069c13b-kube-api-access-6mf2w\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.975396 5124 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e73a9b84-fd97-46e5-a51c-8f4ca069c13b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.977770 5124 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rmhkx"] Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.980991 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4qj8"] Jan 26 00:11:22 crc kubenswrapper[5124]: I0126 00:11:22.983350 5124 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4qj8"] Jan 26 00:11:23 crc kubenswrapper[5124]: I0126 00:11:23.005459 5124 scope.go:117] "RemoveContainer" containerID="f4d2744a8a3edc6305a2e1e8b3ea9f7a05ca48dec7024215f746a7b2eb61fe3d" Jan 26 00:11:23 crc kubenswrapper[5124]: I0126 00:11:23.147066 5124 scope.go:117] "RemoveContainer" containerID="514e72e6c744d0e835e2ced78ac51c68e0ec35056d7f2be8f13dde77f1886150" Jan 26 00:11:23 crc kubenswrapper[5124]: I0126 00:11:23.169707 5124 scope.go:117] "RemoveContainer" containerID="343b521a008186a07c5214e473c9e4646c1715392f3b44b70553210abbb38c1c" Jan 26 00:11:23 crc kubenswrapper[5124]: I0126 00:11:23.242208 5124 scope.go:117] "RemoveContainer" containerID="79b75c06745c3bd4f5a9764a315cc46606fc2286965860de374f9cb012c20f06" Jan 26 00:11:23 crc kubenswrapper[5124]: I0126 00:11:23.296708 5124 scope.go:117] "RemoveContainer" containerID="514e72e6c744d0e835e2ced78ac51c68e0ec35056d7f2be8f13dde77f1886150" Jan 26 00:11:23 crc kubenswrapper[5124]: E0126 00:11:23.297434 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"514e72e6c744d0e835e2ced78ac51c68e0ec35056d7f2be8f13dde77f1886150\": container with ID starting with 514e72e6c744d0e835e2ced78ac51c68e0ec35056d7f2be8f13dde77f1886150 not found: ID does not exist" containerID="514e72e6c744d0e835e2ced78ac51c68e0ec35056d7f2be8f13dde77f1886150" Jan 26 00:11:23 crc kubenswrapper[5124]: I0126 00:11:23.297536 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"514e72e6c744d0e835e2ced78ac51c68e0ec35056d7f2be8f13dde77f1886150"} err="failed to get container status \"514e72e6c744d0e835e2ced78ac51c68e0ec35056d7f2be8f13dde77f1886150\": rpc error: code = NotFound desc = could not find container \"514e72e6c744d0e835e2ced78ac51c68e0ec35056d7f2be8f13dde77f1886150\": container with ID starting with 514e72e6c744d0e835e2ced78ac51c68e0ec35056d7f2be8f13dde77f1886150 not found: ID does not exist" Jan 26 00:11:23 crc kubenswrapper[5124]: I0126 00:11:23.297632 5124 scope.go:117] "RemoveContainer" containerID="343b521a008186a07c5214e473c9e4646c1715392f3b44b70553210abbb38c1c" Jan 26 00:11:23 crc kubenswrapper[5124]: E0126 00:11:23.297931 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"343b521a008186a07c5214e473c9e4646c1715392f3b44b70553210abbb38c1c\": container with ID starting with 343b521a008186a07c5214e473c9e4646c1715392f3b44b70553210abbb38c1c not found: ID does not exist" containerID="343b521a008186a07c5214e473c9e4646c1715392f3b44b70553210abbb38c1c" Jan 26 00:11:23 crc kubenswrapper[5124]: I0126 
00:11:23.298023 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"343b521a008186a07c5214e473c9e4646c1715392f3b44b70553210abbb38c1c"} err="failed to get container status \"343b521a008186a07c5214e473c9e4646c1715392f3b44b70553210abbb38c1c\": rpc error: code = NotFound desc = could not find container \"343b521a008186a07c5214e473c9e4646c1715392f3b44b70553210abbb38c1c\": container with ID starting with 343b521a008186a07c5214e473c9e4646c1715392f3b44b70553210abbb38c1c not found: ID does not exist" Jan 26 00:11:23 crc kubenswrapper[5124]: I0126 00:11:23.298097 5124 scope.go:117] "RemoveContainer" containerID="79b75c06745c3bd4f5a9764a315cc46606fc2286965860de374f9cb012c20f06" Jan 26 00:11:23 crc kubenswrapper[5124]: E0126 00:11:23.298338 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79b75c06745c3bd4f5a9764a315cc46606fc2286965860de374f9cb012c20f06\": container with ID starting with 79b75c06745c3bd4f5a9764a315cc46606fc2286965860de374f9cb012c20f06 not found: ID does not exist" containerID="79b75c06745c3bd4f5a9764a315cc46606fc2286965860de374f9cb012c20f06" Jan 26 00:11:23 crc kubenswrapper[5124]: I0126 00:11:23.298429 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79b75c06745c3bd4f5a9764a315cc46606fc2286965860de374f9cb012c20f06"} err="failed to get container status \"79b75c06745c3bd4f5a9764a315cc46606fc2286965860de374f9cb012c20f06\": rpc error: code = NotFound desc = could not find container \"79b75c06745c3bd4f5a9764a315cc46606fc2286965860de374f9cb012c20f06\": container with ID starting with 79b75c06745c3bd4f5a9764a315cc46606fc2286965860de374f9cb012c20f06 not found: ID does not exist" Jan 26 00:11:23 crc kubenswrapper[5124]: I0126 00:11:23.632151 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nkp7h" Jan 26 00:11:23 crc kubenswrapper[5124]: I0126 00:11:23.790992 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/433ef7d9-9310-4fac-9271-fa7143485c0b-catalog-content\") pod \"433ef7d9-9310-4fac-9271-fa7143485c0b\" (UID: \"433ef7d9-9310-4fac-9271-fa7143485c0b\") " Jan 26 00:11:23 crc kubenswrapper[5124]: I0126 00:11:23.791072 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlcx8\" (UniqueName: \"kubernetes.io/projected/433ef7d9-9310-4fac-9271-fa7143485c0b-kube-api-access-xlcx8\") pod \"433ef7d9-9310-4fac-9271-fa7143485c0b\" (UID: \"433ef7d9-9310-4fac-9271-fa7143485c0b\") " Jan 26 00:11:23 crc kubenswrapper[5124]: I0126 00:11:23.791129 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/433ef7d9-9310-4fac-9271-fa7143485c0b-utilities\") pod \"433ef7d9-9310-4fac-9271-fa7143485c0b\" (UID: \"433ef7d9-9310-4fac-9271-fa7143485c0b\") " Jan 26 00:11:23 crc kubenswrapper[5124]: I0126 00:11:23.792176 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/433ef7d9-9310-4fac-9271-fa7143485c0b-utilities" (OuterVolumeSpecName: "utilities") pod "433ef7d9-9310-4fac-9271-fa7143485c0b" (UID: "433ef7d9-9310-4fac-9271-fa7143485c0b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:23 crc kubenswrapper[5124]: I0126 00:11:23.816197 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/433ef7d9-9310-4fac-9271-fa7143485c0b-kube-api-access-xlcx8" (OuterVolumeSpecName: "kube-api-access-xlcx8") pod "433ef7d9-9310-4fac-9271-fa7143485c0b" (UID: "433ef7d9-9310-4fac-9271-fa7143485c0b"). InnerVolumeSpecName "kube-api-access-xlcx8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:23 crc kubenswrapper[5124]: I0126 00:11:23.831976 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/433ef7d9-9310-4fac-9271-fa7143485c0b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "433ef7d9-9310-4fac-9271-fa7143485c0b" (UID: "433ef7d9-9310-4fac-9271-fa7143485c0b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:23 crc kubenswrapper[5124]: I0126 00:11:23.892618 5124 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/433ef7d9-9310-4fac-9271-fa7143485c0b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:23 crc kubenswrapper[5124]: I0126 00:11:23.892656 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xlcx8\" (UniqueName: \"kubernetes.io/projected/433ef7d9-9310-4fac-9271-fa7143485c0b-kube-api-access-xlcx8\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:23 crc kubenswrapper[5124]: I0126 00:11:23.892669 5124 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/433ef7d9-9310-4fac-9271-fa7143485c0b-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:23 crc kubenswrapper[5124]: I0126 00:11:23.911053 5124 generic.go:358] "Generic (PLEG): container finished" podID="9075b91b-c638-4c64-95b7-1c58a6e5b132" containerID="d3976c025e1bf8b0058fa5c5281a3c0e35cfaaa0d83de77f07b8d7cb9c52c50b" exitCode=0 Jan 26 00:11:23 crc kubenswrapper[5124]: I0126 00:11:23.911180 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hbcq8" event={"ID":"9075b91b-c638-4c64-95b7-1c58a6e5b132","Type":"ContainerDied","Data":"d3976c025e1bf8b0058fa5c5281a3c0e35cfaaa0d83de77f07b8d7cb9c52c50b"} Jan 26 00:11:23 crc kubenswrapper[5124]: I0126 00:11:23.913000 5124 generic.go:358] "Generic (PLEG): container finished" podID="433ef7d9-9310-4fac-9271-fa7143485c0b" containerID="cb93442960e77f6d04f9aa7a35e94462e9557163d6a400da6cac1497427b5b86" exitCode=0 Jan 26 00:11:23 crc kubenswrapper[5124]: I0126 00:11:23.913120 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nkp7h" event={"ID":"433ef7d9-9310-4fac-9271-fa7143485c0b","Type":"ContainerDied","Data":"cb93442960e77f6d04f9aa7a35e94462e9557163d6a400da6cac1497427b5b86"} Jan 26 00:11:23 crc kubenswrapper[5124]: I0126 00:11:23.913136 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nkp7h" Jan 26 00:11:23 crc kubenswrapper[5124]: I0126 00:11:23.913167 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nkp7h" event={"ID":"433ef7d9-9310-4fac-9271-fa7143485c0b","Type":"ContainerDied","Data":"42c8c90f05e9d0113ecc8733bf9991d3c2507f7dd97ec1ab72fed67f71a3e7c2"} Jan 26 00:11:23 crc kubenswrapper[5124]: I0126 00:11:23.913190 5124 scope.go:117] "RemoveContainer" containerID="cb93442960e77f6d04f9aa7a35e94462e9557163d6a400da6cac1497427b5b86" Jan 26 00:11:23 crc kubenswrapper[5124]: I0126 00:11:23.947119 5124 scope.go:117] "RemoveContainer" containerID="0cb9c82e5635ddf7f2ffd0484dedb5c41bde17c135cd11d54ccbad42e73e2f2a" Jan 26 00:11:23 crc kubenswrapper[5124]: I0126 00:11:23.953728 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nkp7h"] Jan 26 00:11:23 crc kubenswrapper[5124]: I0126 00:11:23.961930 5124 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nkp7h"] Jan 26 00:11:23 crc kubenswrapper[5124]: I0126 00:11:23.975681 5124 scope.go:117] "RemoveContainer" containerID="606707c67d893b97146fa8f78c4895e56a419fb8f503bc6e14e90a46cf97f59c" Jan 26 00:11:24 crc kubenswrapper[5124]: I0126 00:11:24.071992 5124 scope.go:117] "RemoveContainer" containerID="cb93442960e77f6d04f9aa7a35e94462e9557163d6a400da6cac1497427b5b86" Jan 26 00:11:24 crc kubenswrapper[5124]: E0126 00:11:24.072552 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb93442960e77f6d04f9aa7a35e94462e9557163d6a400da6cac1497427b5b86\": container with ID starting with cb93442960e77f6d04f9aa7a35e94462e9557163d6a400da6cac1497427b5b86 not found: ID does not exist" containerID="cb93442960e77f6d04f9aa7a35e94462e9557163d6a400da6cac1497427b5b86" Jan 26 00:11:24 crc kubenswrapper[5124]: I0126 00:11:24.072613 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb93442960e77f6d04f9aa7a35e94462e9557163d6a400da6cac1497427b5b86"} err="failed to get container status \"cb93442960e77f6d04f9aa7a35e94462e9557163d6a400da6cac1497427b5b86\": rpc error: code = NotFound desc = could not find container \"cb93442960e77f6d04f9aa7a35e94462e9557163d6a400da6cac1497427b5b86\": container with ID starting with cb93442960e77f6d04f9aa7a35e94462e9557163d6a400da6cac1497427b5b86 not found: ID does not exist" Jan 26 00:11:24 crc kubenswrapper[5124]: I0126 00:11:24.072666 5124 scope.go:117] "RemoveContainer" containerID="0cb9c82e5635ddf7f2ffd0484dedb5c41bde17c135cd11d54ccbad42e73e2f2a" Jan 26 00:11:24 crc kubenswrapper[5124]: E0126 00:11:24.072949 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cb9c82e5635ddf7f2ffd0484dedb5c41bde17c135cd11d54ccbad42e73e2f2a\": container with ID starting with 0cb9c82e5635ddf7f2ffd0484dedb5c41bde17c135cd11d54ccbad42e73e2f2a not found: ID does not exist" containerID="0cb9c82e5635ddf7f2ffd0484dedb5c41bde17c135cd11d54ccbad42e73e2f2a" Jan 26 00:11:24 crc kubenswrapper[5124]: I0126 00:11:24.072991 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cb9c82e5635ddf7f2ffd0484dedb5c41bde17c135cd11d54ccbad42e73e2f2a"} err="failed to get container status \"0cb9c82e5635ddf7f2ffd0484dedb5c41bde17c135cd11d54ccbad42e73e2f2a\": rpc error: code = NotFound desc = could not find 
container \"0cb9c82e5635ddf7f2ffd0484dedb5c41bde17c135cd11d54ccbad42e73e2f2a\": container with ID starting with 0cb9c82e5635ddf7f2ffd0484dedb5c41bde17c135cd11d54ccbad42e73e2f2a not found: ID does not exist" Jan 26 00:11:24 crc kubenswrapper[5124]: I0126 00:11:24.073017 5124 scope.go:117] "RemoveContainer" containerID="606707c67d893b97146fa8f78c4895e56a419fb8f503bc6e14e90a46cf97f59c" Jan 26 00:11:24 crc kubenswrapper[5124]: E0126 00:11:24.073257 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"606707c67d893b97146fa8f78c4895e56a419fb8f503bc6e14e90a46cf97f59c\": container with ID starting with 606707c67d893b97146fa8f78c4895e56a419fb8f503bc6e14e90a46cf97f59c not found: ID does not exist" containerID="606707c67d893b97146fa8f78c4895e56a419fb8f503bc6e14e90a46cf97f59c" Jan 26 00:11:24 crc kubenswrapper[5124]: I0126 00:11:24.073281 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"606707c67d893b97146fa8f78c4895e56a419fb8f503bc6e14e90a46cf97f59c"} err="failed to get container status \"606707c67d893b97146fa8f78c4895e56a419fb8f503bc6e14e90a46cf97f59c\": rpc error: code = NotFound desc = could not find container \"606707c67d893b97146fa8f78c4895e56a419fb8f503bc6e14e90a46cf97f59c\": container with ID starting with 606707c67d893b97146fa8f78c4895e56a419fb8f503bc6e14e90a46cf97f59c not found: ID does not exist" Jan 26 00:11:24 crc kubenswrapper[5124]: I0126 00:11:24.371459 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="433ef7d9-9310-4fac-9271-fa7143485c0b" path="/var/lib/kubelet/pods/433ef7d9-9310-4fac-9271-fa7143485c0b/volumes" Jan 26 00:11:24 crc kubenswrapper[5124]: I0126 00:11:24.373095 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a69d5905-85d8-49b8-ab54-15fc8f104c31" path="/var/lib/kubelet/pods/a69d5905-85d8-49b8-ab54-15fc8f104c31/volumes" Jan 26 00:11:24 crc kubenswrapper[5124]: I0126 00:11:24.373654 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af75a02f-4678-4afa-a8c2-acaddf134bc4" path="/var/lib/kubelet/pods/af75a02f-4678-4afa-a8c2-acaddf134bc4/volumes" Jan 26 00:11:24 crc kubenswrapper[5124]: I0126 00:11:24.374804 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e73a9b84-fd97-46e5-a51c-8f4ca069c13b" path="/var/lib/kubelet/pods/e73a9b84-fd97-46e5-a51c-8f4ca069c13b/volumes" Jan 26 00:11:24 crc kubenswrapper[5124]: I0126 00:11:24.924425 5124 generic.go:358] "Generic (PLEG): container finished" podID="beb215dd-478e-4b23-b77c-5e741e026932" containerID="585ad95565c25b69404f055f5952485511d97b01057417249e6e093bd69de12b" exitCode=0 Jan 26 00:11:24 crc kubenswrapper[5124]: I0126 00:11:24.924514 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7m58f" event={"ID":"beb215dd-478e-4b23-b77c-5e741e026932","Type":"ContainerDied","Data":"585ad95565c25b69404f055f5952485511d97b01057417249e6e093bd69de12b"} Jan 26 00:11:24 crc kubenswrapper[5124]: I0126 00:11:24.929446 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hbcq8" event={"ID":"9075b91b-c638-4c64-95b7-1c58a6e5b132","Type":"ContainerStarted","Data":"d4b54b7f574f4d8ddfccef611e50f70b3c5b0afb24f8a29086aa6e225b45a708"} Jan 26 00:11:25 crc kubenswrapper[5124]: I0126 00:11:25.337500 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hbcq8" 
podStartSLOduration=5.528958525 podStartE2EDuration="33.337486378s" podCreationTimestamp="2026-01-26 00:10:52 +0000 UTC" firstStartedPulling="2026-01-26 00:10:54.639877828 +0000 UTC m=+132.548797177" lastFinishedPulling="2026-01-26 00:11:22.448405681 +0000 UTC m=+160.357325030" observedRunningTime="2026-01-26 00:11:25.334835756 +0000 UTC m=+163.243755115" watchObservedRunningTime="2026-01-26 00:11:25.337486378 +0000 UTC m=+163.246405727" Jan 26 00:11:26 crc kubenswrapper[5124]: I0126 00:11:26.941546 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7m58f" event={"ID":"beb215dd-478e-4b23-b77c-5e741e026932","Type":"ContainerStarted","Data":"d636d8c930a6ef7a4d8bca6d30375e240339be66dd74a2341d580b7a669d96e8"} Jan 26 00:11:26 crc kubenswrapper[5124]: I0126 00:11:26.960445 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7m58f" podStartSLOduration=7.136379638 podStartE2EDuration="34.960428378s" podCreationTimestamp="2026-01-26 00:10:52 +0000 UTC" firstStartedPulling="2026-01-26 00:10:54.636284463 +0000 UTC m=+132.545203812" lastFinishedPulling="2026-01-26 00:11:22.460333193 +0000 UTC m=+160.369252552" observedRunningTime="2026-01-26 00:11:26.957622502 +0000 UTC m=+164.866541881" watchObservedRunningTime="2026-01-26 00:11:26.960428378 +0000 UTC m=+164.869347727" Jan 26 00:11:29 crc kubenswrapper[5124]: I0126 00:11:29.929099 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 26 00:11:29 crc kubenswrapper[5124]: I0126 00:11:29.930177 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="01e8a3d0-83cc-43b6-b028-70688cd1c706" containerName="pruner" Jan 26 00:11:29 crc kubenswrapper[5124]: I0126 00:11:29.930190 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="01e8a3d0-83cc-43b6-b028-70688cd1c706" containerName="pruner" Jan 26 00:11:29 crc kubenswrapper[5124]: I0126 00:11:29.930198 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="af75a02f-4678-4afa-a8c2-acaddf134bc4" containerName="registry-server" Jan 26 00:11:29 crc kubenswrapper[5124]: I0126 00:11:29.930204 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="af75a02f-4678-4afa-a8c2-acaddf134bc4" containerName="registry-server" Jan 26 00:11:29 crc kubenswrapper[5124]: I0126 00:11:29.930226 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="af75a02f-4678-4afa-a8c2-acaddf134bc4" containerName="extract-content" Jan 26 00:11:29 crc kubenswrapper[5124]: I0126 00:11:29.930231 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="af75a02f-4678-4afa-a8c2-acaddf134bc4" containerName="extract-content" Jan 26 00:11:29 crc kubenswrapper[5124]: I0126 00:11:29.930242 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="af75a02f-4678-4afa-a8c2-acaddf134bc4" containerName="extract-utilities" Jan 26 00:11:29 crc kubenswrapper[5124]: I0126 00:11:29.930249 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="af75a02f-4678-4afa-a8c2-acaddf134bc4" containerName="extract-utilities" Jan 26 00:11:29 crc kubenswrapper[5124]: I0126 00:11:29.930255 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e73a9b84-fd97-46e5-a51c-8f4ca069c13b" containerName="extract-utilities" Jan 26 00:11:29 crc kubenswrapper[5124]: I0126 00:11:29.930260 5124 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e73a9b84-fd97-46e5-a51c-8f4ca069c13b" containerName="extract-utilities" Jan 26 00:11:29 crc kubenswrapper[5124]: I0126 00:11:29.930271 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="433ef7d9-9310-4fac-9271-fa7143485c0b" containerName="extract-utilities" Jan 26 00:11:29 crc kubenswrapper[5124]: I0126 00:11:29.930276 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="433ef7d9-9310-4fac-9271-fa7143485c0b" containerName="extract-utilities" Jan 26 00:11:29 crc kubenswrapper[5124]: I0126 00:11:29.930285 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="433ef7d9-9310-4fac-9271-fa7143485c0b" containerName="registry-server" Jan 26 00:11:29 crc kubenswrapper[5124]: I0126 00:11:29.930290 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="433ef7d9-9310-4fac-9271-fa7143485c0b" containerName="registry-server" Jan 26 00:11:29 crc kubenswrapper[5124]: I0126 00:11:29.930297 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e73a9b84-fd97-46e5-a51c-8f4ca069c13b" containerName="extract-content" Jan 26 00:11:29 crc kubenswrapper[5124]: I0126 00:11:29.930302 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="e73a9b84-fd97-46e5-a51c-8f4ca069c13b" containerName="extract-content" Jan 26 00:11:29 crc kubenswrapper[5124]: I0126 00:11:29.930310 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="433ef7d9-9310-4fac-9271-fa7143485c0b" containerName="extract-content" Jan 26 00:11:29 crc kubenswrapper[5124]: I0126 00:11:29.930316 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="433ef7d9-9310-4fac-9271-fa7143485c0b" containerName="extract-content" Jan 26 00:11:29 crc kubenswrapper[5124]: I0126 00:11:29.930325 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a69d5905-85d8-49b8-ab54-15fc8f104c31" containerName="kube-multus-additional-cni-plugins" Jan 26 00:11:29 crc kubenswrapper[5124]: I0126 00:11:29.930331 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="a69d5905-85d8-49b8-ab54-15fc8f104c31" containerName="kube-multus-additional-cni-plugins" Jan 26 00:11:29 crc kubenswrapper[5124]: I0126 00:11:29.930338 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e73a9b84-fd97-46e5-a51c-8f4ca069c13b" containerName="registry-server" Jan 26 00:11:29 crc kubenswrapper[5124]: I0126 00:11:29.930343 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="e73a9b84-fd97-46e5-a51c-8f4ca069c13b" containerName="registry-server" Jan 26 00:11:29 crc kubenswrapper[5124]: I0126 00:11:29.930423 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="a69d5905-85d8-49b8-ab54-15fc8f104c31" containerName="kube-multus-additional-cni-plugins" Jan 26 00:11:29 crc kubenswrapper[5124]: I0126 00:11:29.930436 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="433ef7d9-9310-4fac-9271-fa7143485c0b" containerName="registry-server" Jan 26 00:11:29 crc kubenswrapper[5124]: I0126 00:11:29.930445 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="01e8a3d0-83cc-43b6-b028-70688cd1c706" containerName="pruner" Jan 26 00:11:29 crc kubenswrapper[5124]: I0126 00:11:29.930453 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="af75a02f-4678-4afa-a8c2-acaddf134bc4" containerName="registry-server" Jan 26 00:11:29 crc kubenswrapper[5124]: I0126 00:11:29.930460 5124 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="e73a9b84-fd97-46e5-a51c-8f4ca069c13b" containerName="registry-server" Jan 26 00:11:29 crc kubenswrapper[5124]: I0126 00:11:29.933661 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 26 00:11:29 crc kubenswrapper[5124]: I0126 00:11:29.937040 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Jan 26 00:11:29 crc kubenswrapper[5124]: I0126 00:11:29.938413 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Jan 26 00:11:29 crc kubenswrapper[5124]: I0126 00:11:29.940087 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 26 00:11:29 crc kubenswrapper[5124]: I0126 00:11:29.970775 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1011d335-142f-4db9-bc49-2bd3caadc053-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"1011d335-142f-4db9-bc49-2bd3caadc053\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 26 00:11:29 crc kubenswrapper[5124]: I0126 00:11:29.970824 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1011d335-142f-4db9-bc49-2bd3caadc053-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"1011d335-142f-4db9-bc49-2bd3caadc053\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 26 00:11:30 crc kubenswrapper[5124]: I0126 00:11:30.072261 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1011d335-142f-4db9-bc49-2bd3caadc053-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"1011d335-142f-4db9-bc49-2bd3caadc053\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 26 00:11:30 crc kubenswrapper[5124]: I0126 00:11:30.072323 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1011d335-142f-4db9-bc49-2bd3caadc053-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"1011d335-142f-4db9-bc49-2bd3caadc053\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 26 00:11:30 crc kubenswrapper[5124]: I0126 00:11:30.072447 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1011d335-142f-4db9-bc49-2bd3caadc053-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"1011d335-142f-4db9-bc49-2bd3caadc053\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 26 00:11:30 crc kubenswrapper[5124]: I0126 00:11:30.097300 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1011d335-142f-4db9-bc49-2bd3caadc053-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"1011d335-142f-4db9-bc49-2bd3caadc053\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 26 00:11:30 crc kubenswrapper[5124]: I0126 00:11:30.259160 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 26 00:11:30 crc kubenswrapper[5124]: I0126 00:11:30.659916 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 26 00:11:30 crc kubenswrapper[5124]: I0126 00:11:30.978947 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"1011d335-142f-4db9-bc49-2bd3caadc053","Type":"ContainerStarted","Data":"268d54538638e96f2a61bbab82f1e2c8a78b2deca2480dd1705af87d4397b533"} Jan 26 00:11:31 crc kubenswrapper[5124]: I0126 00:11:31.143066 5124 ???:1] "http: TLS handshake error from 192.168.126.11:51808: no serving certificate available for the kubelet" Jan 26 00:11:31 crc kubenswrapper[5124]: I0126 00:11:31.986921 5124 generic.go:358] "Generic (PLEG): container finished" podID="1011d335-142f-4db9-bc49-2bd3caadc053" containerID="7d425c8493db5c0d28a5a920fb899196c14471a856d84c545e3d17822fdd7982" exitCode=0 Jan 26 00:11:31 crc kubenswrapper[5124]: I0126 00:11:31.986978 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"1011d335-142f-4db9-bc49-2bd3caadc053","Type":"ContainerDied","Data":"7d425c8493db5c0d28a5a920fb899196c14471a856d84c545e3d17822fdd7982"} Jan 26 00:11:32 crc kubenswrapper[5124]: I0126 00:11:32.961055 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-7m58f" Jan 26 00:11:32 crc kubenswrapper[5124]: I0126 00:11:32.961405 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7m58f" Jan 26 00:11:33 crc kubenswrapper[5124]: I0126 00:11:33.009468 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7m58f" Jan 26 00:11:33 crc kubenswrapper[5124]: I0126 00:11:33.045032 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7m58f" Jan 26 00:11:33 crc kubenswrapper[5124]: I0126 00:11:33.307408 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hbcq8" Jan 26 00:11:33 crc kubenswrapper[5124]: I0126 00:11:33.307789 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-hbcq8" Jan 26 00:11:33 crc kubenswrapper[5124]: I0126 00:11:33.309344 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 26 00:11:33 crc kubenswrapper[5124]: I0126 00:11:33.354268 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hbcq8" Jan 26 00:11:33 crc kubenswrapper[5124]: I0126 00:11:33.409840 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1011d335-142f-4db9-bc49-2bd3caadc053-kubelet-dir\") pod \"1011d335-142f-4db9-bc49-2bd3caadc053\" (UID: \"1011d335-142f-4db9-bc49-2bd3caadc053\") " Jan 26 00:11:33 crc kubenswrapper[5124]: I0126 00:11:33.410019 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1011d335-142f-4db9-bc49-2bd3caadc053-kube-api-access\") pod \"1011d335-142f-4db9-bc49-2bd3caadc053\" (UID: \"1011d335-142f-4db9-bc49-2bd3caadc053\") " Jan 26 00:11:33 crc kubenswrapper[5124]: I0126 00:11:33.410285 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1011d335-142f-4db9-bc49-2bd3caadc053-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1011d335-142f-4db9-bc49-2bd3caadc053" (UID: "1011d335-142f-4db9-bc49-2bd3caadc053"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:11:33 crc kubenswrapper[5124]: I0126 00:11:33.415741 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1011d335-142f-4db9-bc49-2bd3caadc053-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1011d335-142f-4db9-bc49-2bd3caadc053" (UID: "1011d335-142f-4db9-bc49-2bd3caadc053"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:33 crc kubenswrapper[5124]: I0126 00:11:33.511697 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1011d335-142f-4db9-bc49-2bd3caadc053-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:33 crc kubenswrapper[5124]: I0126 00:11:33.511728 5124 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1011d335-142f-4db9-bc49-2bd3caadc053-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:34 crc kubenswrapper[5124]: I0126 00:11:34.000281 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 26 00:11:34 crc kubenswrapper[5124]: I0126 00:11:34.000354 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"1011d335-142f-4db9-bc49-2bd3caadc053","Type":"ContainerDied","Data":"268d54538638e96f2a61bbab82f1e2c8a78b2deca2480dd1705af87d4397b533"} Jan 26 00:11:34 crc kubenswrapper[5124]: I0126 00:11:34.000407 5124 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="268d54538638e96f2a61bbab82f1e2c8a78b2deca2480dd1705af87d4397b533" Jan 26 00:11:34 crc kubenswrapper[5124]: I0126 00:11:34.034887 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hbcq8" Jan 26 00:11:34 crc kubenswrapper[5124]: I0126 00:11:34.344759 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hbcq8"] Jan 26 00:11:35 crc kubenswrapper[5124]: I0126 00:11:35.929035 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 26 00:11:35 crc kubenswrapper[5124]: I0126 00:11:35.929556 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1011d335-142f-4db9-bc49-2bd3caadc053" containerName="pruner" Jan 26 00:11:35 crc kubenswrapper[5124]: I0126 00:11:35.929568 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="1011d335-142f-4db9-bc49-2bd3caadc053" containerName="pruner" Jan 26 00:11:35 crc kubenswrapper[5124]: I0126 00:11:35.929665 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="1011d335-142f-4db9-bc49-2bd3caadc053" containerName="pruner" Jan 26 00:11:36 crc kubenswrapper[5124]: I0126 00:11:36.894111 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:11:36 crc kubenswrapper[5124]: I0126 00:11:36.894281 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hbcq8" podUID="9075b91b-c638-4c64-95b7-1c58a6e5b132" containerName="registry-server" containerID="cri-o://d4b54b7f574f4d8ddfccef611e50f70b3c5b0afb24f8a29086aa6e225b45a708" gracePeriod=2 Jan 26 00:11:36 crc kubenswrapper[5124]: I0126 00:11:36.896296 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Jan 26 00:11:36 crc kubenswrapper[5124]: I0126 00:11:36.897354 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Jan 26 00:11:36 crc kubenswrapper[5124]: I0126 00:11:36.903988 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 26 00:11:36 crc kubenswrapper[5124]: I0126 00:11:36.969937 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a3d95296-9ae1-4722-9d5d-bdd64e912859-var-lock\") pod \"installer-12-crc\" (UID: \"a3d95296-9ae1-4722-9d5d-bdd64e912859\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:11:36 crc kubenswrapper[5124]: I0126 00:11:36.970015 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3d95296-9ae1-4722-9d5d-bdd64e912859-kube-api-access\") pod \"installer-12-crc\" (UID: \"a3d95296-9ae1-4722-9d5d-bdd64e912859\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:11:36 crc kubenswrapper[5124]: I0126 00:11:36.970075 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3d95296-9ae1-4722-9d5d-bdd64e912859-kubelet-dir\") pod \"installer-12-crc\" (UID: \"a3d95296-9ae1-4722-9d5d-bdd64e912859\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:11:37 crc kubenswrapper[5124]: I0126 00:11:37.023359 5124 generic.go:358] "Generic (PLEG): container finished" podID="9075b91b-c638-4c64-95b7-1c58a6e5b132" containerID="d4b54b7f574f4d8ddfccef611e50f70b3c5b0afb24f8a29086aa6e225b45a708" exitCode=0 Jan 26 00:11:37 crc kubenswrapper[5124]: I0126 00:11:37.023429 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hbcq8" event={"ID":"9075b91b-c638-4c64-95b7-1c58a6e5b132","Type":"ContainerDied","Data":"d4b54b7f574f4d8ddfccef611e50f70b3c5b0afb24f8a29086aa6e225b45a708"} Jan 26 00:11:37 crc kubenswrapper[5124]: I0126 00:11:37.071094 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a3d95296-9ae1-4722-9d5d-bdd64e912859-var-lock\") pod \"installer-12-crc\" (UID: \"a3d95296-9ae1-4722-9d5d-bdd64e912859\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:11:37 crc kubenswrapper[5124]: I0126 00:11:37.071273 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3d95296-9ae1-4722-9d5d-bdd64e912859-kube-api-access\") pod \"installer-12-crc\" (UID: \"a3d95296-9ae1-4722-9d5d-bdd64e912859\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:11:37 crc kubenswrapper[5124]: 
I0126 00:11:37.071281 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a3d95296-9ae1-4722-9d5d-bdd64e912859-var-lock\") pod \"installer-12-crc\" (UID: \"a3d95296-9ae1-4722-9d5d-bdd64e912859\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:11:37 crc kubenswrapper[5124]: I0126 00:11:37.071564 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3d95296-9ae1-4722-9d5d-bdd64e912859-kubelet-dir\") pod \"installer-12-crc\" (UID: \"a3d95296-9ae1-4722-9d5d-bdd64e912859\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:11:37 crc kubenswrapper[5124]: I0126 00:11:37.071720 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3d95296-9ae1-4722-9d5d-bdd64e912859-kubelet-dir\") pod \"installer-12-crc\" (UID: \"a3d95296-9ae1-4722-9d5d-bdd64e912859\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:11:37 crc kubenswrapper[5124]: I0126 00:11:37.094187 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3d95296-9ae1-4722-9d5d-bdd64e912859-kube-api-access\") pod \"installer-12-crc\" (UID: \"a3d95296-9ae1-4722-9d5d-bdd64e912859\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:11:37 crc kubenswrapper[5124]: I0126 00:11:37.210960 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:11:37 crc kubenswrapper[5124]: I0126 00:11:37.316782 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hbcq8" Jan 26 00:11:37 crc kubenswrapper[5124]: I0126 00:11:37.374608 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9075b91b-c638-4c64-95b7-1c58a6e5b132-utilities\") pod \"9075b91b-c638-4c64-95b7-1c58a6e5b132\" (UID: \"9075b91b-c638-4c64-95b7-1c58a6e5b132\") " Jan 26 00:11:37 crc kubenswrapper[5124]: I0126 00:11:37.374690 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9075b91b-c638-4c64-95b7-1c58a6e5b132-catalog-content\") pod \"9075b91b-c638-4c64-95b7-1c58a6e5b132\" (UID: \"9075b91b-c638-4c64-95b7-1c58a6e5b132\") " Jan 26 00:11:37 crc kubenswrapper[5124]: I0126 00:11:37.374802 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ft46g\" (UniqueName: \"kubernetes.io/projected/9075b91b-c638-4c64-95b7-1c58a6e5b132-kube-api-access-ft46g\") pod \"9075b91b-c638-4c64-95b7-1c58a6e5b132\" (UID: \"9075b91b-c638-4c64-95b7-1c58a6e5b132\") " Jan 26 00:11:37 crc kubenswrapper[5124]: I0126 00:11:37.375654 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9075b91b-c638-4c64-95b7-1c58a6e5b132-utilities" (OuterVolumeSpecName: "utilities") pod "9075b91b-c638-4c64-95b7-1c58a6e5b132" (UID: "9075b91b-c638-4c64-95b7-1c58a6e5b132"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:37 crc kubenswrapper[5124]: I0126 00:11:37.380577 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9075b91b-c638-4c64-95b7-1c58a6e5b132-kube-api-access-ft46g" (OuterVolumeSpecName: "kube-api-access-ft46g") pod "9075b91b-c638-4c64-95b7-1c58a6e5b132" (UID: "9075b91b-c638-4c64-95b7-1c58a6e5b132"). InnerVolumeSpecName "kube-api-access-ft46g". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:37 crc kubenswrapper[5124]: I0126 00:11:37.476554 5124 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9075b91b-c638-4c64-95b7-1c58a6e5b132-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:37 crc kubenswrapper[5124]: I0126 00:11:37.476604 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ft46g\" (UniqueName: \"kubernetes.io/projected/9075b91b-c638-4c64-95b7-1c58a6e5b132-kube-api-access-ft46g\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:37 crc kubenswrapper[5124]: I0126 00:11:37.477702 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9075b91b-c638-4c64-95b7-1c58a6e5b132-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9075b91b-c638-4c64-95b7-1c58a6e5b132" (UID: "9075b91b-c638-4c64-95b7-1c58a6e5b132"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:37 crc kubenswrapper[5124]: I0126 00:11:37.577908 5124 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9075b91b-c638-4c64-95b7-1c58a6e5b132-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:37 crc kubenswrapper[5124]: I0126 00:11:37.616476 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 26 00:11:38 crc kubenswrapper[5124]: I0126 00:11:38.030565 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hbcq8" event={"ID":"9075b91b-c638-4c64-95b7-1c58a6e5b132","Type":"ContainerDied","Data":"d10f942128e82b50b1ec4dca89df42a296108e4a76de95a739a0a9d03377f6d2"} Jan 26 00:11:38 crc kubenswrapper[5124]: I0126 00:11:38.030806 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hbcq8" Jan 26 00:11:38 crc kubenswrapper[5124]: I0126 00:11:38.030902 5124 scope.go:117] "RemoveContainer" containerID="d4b54b7f574f4d8ddfccef611e50f70b3c5b0afb24f8a29086aa6e225b45a708" Jan 26 00:11:38 crc kubenswrapper[5124]: I0126 00:11:38.031559 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"a3d95296-9ae1-4722-9d5d-bdd64e912859","Type":"ContainerStarted","Data":"bff2494ca9011639ff9b6a84ad09356f651154aaddc4a5e381f082dc1fda9513"} Jan 26 00:11:38 crc kubenswrapper[5124]: I0126 00:11:38.047896 5124 scope.go:117] "RemoveContainer" containerID="d3976c025e1bf8b0058fa5c5281a3c0e35cfaaa0d83de77f07b8d7cb9c52c50b" Jan 26 00:11:38 crc kubenswrapper[5124]: I0126 00:11:38.057478 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hbcq8"] Jan 26 00:11:38 crc kubenswrapper[5124]: I0126 00:11:38.062096 5124 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hbcq8"] Jan 26 00:11:38 crc kubenswrapper[5124]: I0126 00:11:38.082624 5124 scope.go:117] "RemoveContainer" containerID="70e5e06288e381a3ec07580f62312e3f7a6d389ae86773648977674fac676d6f" Jan 26 00:11:38 crc kubenswrapper[5124]: I0126 00:11:38.372261 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9075b91b-c638-4c64-95b7-1c58a6e5b132" path="/var/lib/kubelet/pods/9075b91b-c638-4c64-95b7-1c58a6e5b132/volumes" Jan 26 00:11:39 crc kubenswrapper[5124]: I0126 00:11:39.040197 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"a3d95296-9ae1-4722-9d5d-bdd64e912859","Type":"ContainerStarted","Data":"11550495a254dd97b4e363714ba7723d3c0c77372f9c1b9b098891b1a4d5ea31"} Jan 26 00:11:44 crc kubenswrapper[5124]: I0126 00:11:44.375050 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=9.375032293 podStartE2EDuration="9.375032293s" podCreationTimestamp="2026-01-26 00:11:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:39.058876841 +0000 UTC m=+176.967796200" watchObservedRunningTime="2026-01-26 00:11:44.375032293 +0000 UTC m=+182.283951642" Jan 26 00:11:44 crc kubenswrapper[5124]: I0126 00:11:44.378688 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-v5jrb"] Jan 26 00:12:09 crc kubenswrapper[5124]: I0126 00:12:09.422995 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" podUID="b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f" containerName="oauth-openshift" containerID="cri-o://5550c9d24114d2b86df37d3cf1645f9455ef504a5b4c0810680a7b7c05ac758c" gracePeriod=15 Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.251181 5124 generic.go:358] "Generic (PLEG): container finished" podID="b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f" containerID="5550c9d24114d2b86df37d3cf1645f9455ef504a5b4c0810680a7b7c05ac758c" exitCode=0 Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.251283 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" 
event={"ID":"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f","Type":"ContainerDied","Data":"5550c9d24114d2b86df37d3cf1645f9455ef504a5b4c0810680a7b7c05ac758c"} Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.422490 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.483432 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6"] Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.485406 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f" containerName="oauth-openshift" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.490168 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f" containerName="oauth-openshift" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.490361 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9075b91b-c638-4c64-95b7-1c58a6e5b132" containerName="registry-server" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.490460 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="9075b91b-c638-4c64-95b7-1c58a6e5b132" containerName="registry-server" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.490553 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9075b91b-c638-4c64-95b7-1c58a6e5b132" containerName="extract-content" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.490676 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="9075b91b-c638-4c64-95b7-1c58a6e5b132" containerName="extract-content" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.490807 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9075b91b-c638-4c64-95b7-1c58a6e5b132" containerName="extract-utilities" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.490899 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="9075b91b-c638-4c64-95b7-1c58a6e5b132" containerName="extract-utilities" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.491675 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f" containerName="oauth-openshift" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.491764 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="9075b91b-c638-4c64-95b7-1c58a6e5b132" containerName="registry-server" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.505569 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6"] Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.505749 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.520149 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8x9k8\" (UniqueName: \"kubernetes.io/projected/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-kube-api-access-8x9k8\") pod \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.520206 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-audit-policies\") pod \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.520252 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-user-template-login\") pod \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.520285 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-system-router-certs\") pod \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.520763 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-system-service-ca\") pod \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.521016 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-user-template-provider-selection\") pod \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.521146 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f" (UID: "b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.521156 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-user-template-error\") pod \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.521197 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-user-idp-0-file-data\") pod \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.521206 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f" (UID: "b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.521231 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-system-serving-cert\") pod \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.521255 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-audit-dir\") pod \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.521337 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-system-ocp-branding-template\") pod \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.521366 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-system-cliconfig\") pod \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.521384 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-system-session\") pod \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") " Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.521399 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-system-trusted-ca-bundle\") pod \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\" (UID: \"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f\") 
" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.521499 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f" (UID: "b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.521806 5124 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.521863 5124 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.521878 5124 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.522177 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f" (UID: "b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.522257 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f" (UID: "b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.527445 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f" (UID: "b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.529268 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f" (UID: "b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.529477 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f" (UID: "b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.530176 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f" (UID: "b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.531227 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f" (UID: "b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.535760 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-kube-api-access-8x9k8" (OuterVolumeSpecName: "kube-api-access-8x9k8") pod "b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f" (UID: "b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f"). InnerVolumeSpecName "kube-api-access-8x9k8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.536832 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f" (UID: "b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.538214 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f" (UID: "b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.538718 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f" (UID: "b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.623195 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7c7a0197-c1f8-42e2-bb22-48bae8fe5be0-audit-dir\") pod \"oauth-openshift-788ff9cfc5-c2tk6\" (UID: \"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0\") " pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.623243 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7c7a0197-c1f8-42e2-bb22-48bae8fe5be0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-788ff9cfc5-c2tk6\" (UID: \"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0\") " pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.623276 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c7a0197-c1f8-42e2-bb22-48bae8fe5be0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-788ff9cfc5-c2tk6\" (UID: \"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0\") " pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.623432 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7c7a0197-c1f8-42e2-bb22-48bae8fe5be0-v4-0-config-user-template-error\") pod \"oauth-openshift-788ff9cfc5-c2tk6\" (UID: \"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0\") " pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.623524 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7c7a0197-c1f8-42e2-bb22-48bae8fe5be0-v4-0-config-system-session\") pod \"oauth-openshift-788ff9cfc5-c2tk6\" (UID: \"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0\") " pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.623558 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7c7a0197-c1f8-42e2-bb22-48bae8fe5be0-v4-0-config-user-template-login\") pod \"oauth-openshift-788ff9cfc5-c2tk6\" (UID: \"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0\") " pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.623616 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7c7a0197-c1f8-42e2-bb22-48bae8fe5be0-v4-0-config-system-service-ca\") pod \"oauth-openshift-788ff9cfc5-c2tk6\" (UID: \"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0\") " pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.623666 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/7c7a0197-c1f8-42e2-bb22-48bae8fe5be0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-788ff9cfc5-c2tk6\" (UID: \"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0\") " pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.623789 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7c7a0197-c1f8-42e2-bb22-48bae8fe5be0-v4-0-config-system-router-certs\") pod \"oauth-openshift-788ff9cfc5-c2tk6\" (UID: \"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0\") " pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.623854 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7c7a0197-c1f8-42e2-bb22-48bae8fe5be0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-788ff9cfc5-c2tk6\" (UID: \"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0\") " pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.623921 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7c7a0197-c1f8-42e2-bb22-48bae8fe5be0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-788ff9cfc5-c2tk6\" (UID: \"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0\") " pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.623959 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7c7a0197-c1f8-42e2-bb22-48bae8fe5be0-audit-policies\") pod \"oauth-openshift-788ff9cfc5-c2tk6\" (UID: \"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0\") " pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.624040 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7c7a0197-c1f8-42e2-bb22-48bae8fe5be0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-788ff9cfc5-c2tk6\" (UID: \"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0\") " pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.624064 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmdfn\" (UniqueName: \"kubernetes.io/projected/7c7a0197-c1f8-42e2-bb22-48bae8fe5be0-kube-api-access-qmdfn\") pod \"oauth-openshift-788ff9cfc5-c2tk6\" (UID: \"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0\") " pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.624136 5124 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.624149 5124 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.624162 5124 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.624191 5124 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.624201 5124 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.624211 5124 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.624221 5124 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.624232 5124 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.624241 5124 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.624267 5124 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.624279 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8x9k8\" (UniqueName: \"kubernetes.io/projected/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f-kube-api-access-8x9k8\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.725467 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7c7a0197-c1f8-42e2-bb22-48bae8fe5be0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-788ff9cfc5-c2tk6\" (UID: \"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0\") " pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.725530 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/7c7a0197-c1f8-42e2-bb22-48bae8fe5be0-audit-policies\") pod \"oauth-openshift-788ff9cfc5-c2tk6\" (UID: \"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0\") " pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.725768 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7c7a0197-c1f8-42e2-bb22-48bae8fe5be0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-788ff9cfc5-c2tk6\" (UID: \"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0\") " pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.725812 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qmdfn\" (UniqueName: \"kubernetes.io/projected/7c7a0197-c1f8-42e2-bb22-48bae8fe5be0-kube-api-access-qmdfn\") pod \"oauth-openshift-788ff9cfc5-c2tk6\" (UID: \"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0\") " pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.726072 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7c7a0197-c1f8-42e2-bb22-48bae8fe5be0-audit-dir\") pod \"oauth-openshift-788ff9cfc5-c2tk6\" (UID: \"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0\") " pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.726219 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7c7a0197-c1f8-42e2-bb22-48bae8fe5be0-audit-dir\") pod \"oauth-openshift-788ff9cfc5-c2tk6\" (UID: \"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0\") " pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.726286 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7c7a0197-c1f8-42e2-bb22-48bae8fe5be0-audit-policies\") pod \"oauth-openshift-788ff9cfc5-c2tk6\" (UID: \"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0\") " pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.726405 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7c7a0197-c1f8-42e2-bb22-48bae8fe5be0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-788ff9cfc5-c2tk6\" (UID: \"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0\") " pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.726655 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c7a0197-c1f8-42e2-bb22-48bae8fe5be0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-788ff9cfc5-c2tk6\" (UID: \"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0\") " pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.726764 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7c7a0197-c1f8-42e2-bb22-48bae8fe5be0-v4-0-config-user-template-error\") pod \"oauth-openshift-788ff9cfc5-c2tk6\" 
(UID: \"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0\") " pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.726888 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7c7a0197-c1f8-42e2-bb22-48bae8fe5be0-v4-0-config-system-session\") pod \"oauth-openshift-788ff9cfc5-c2tk6\" (UID: \"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0\") " pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.726967 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7c7a0197-c1f8-42e2-bb22-48bae8fe5be0-v4-0-config-user-template-login\") pod \"oauth-openshift-788ff9cfc5-c2tk6\" (UID: \"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0\") " pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.727141 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7c7a0197-c1f8-42e2-bb22-48bae8fe5be0-v4-0-config-system-service-ca\") pod \"oauth-openshift-788ff9cfc5-c2tk6\" (UID: \"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0\") " pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.727283 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7c7a0197-c1f8-42e2-bb22-48bae8fe5be0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-788ff9cfc5-c2tk6\" (UID: \"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0\") " pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.727400 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7c7a0197-c1f8-42e2-bb22-48bae8fe5be0-v4-0-config-system-router-certs\") pod \"oauth-openshift-788ff9cfc5-c2tk6\" (UID: \"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0\") " pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.727478 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7c7a0197-c1f8-42e2-bb22-48bae8fe5be0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-788ff9cfc5-c2tk6\" (UID: \"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0\") " pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.727812 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c7a0197-c1f8-42e2-bb22-48bae8fe5be0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-788ff9cfc5-c2tk6\" (UID: \"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0\") " pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.727887 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7c7a0197-c1f8-42e2-bb22-48bae8fe5be0-v4-0-config-system-service-ca\") pod \"oauth-openshift-788ff9cfc5-c2tk6\" (UID: 
\"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0\") " pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.728545 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7c7a0197-c1f8-42e2-bb22-48bae8fe5be0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-788ff9cfc5-c2tk6\" (UID: \"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0\") " pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.730679 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7c7a0197-c1f8-42e2-bb22-48bae8fe5be0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-788ff9cfc5-c2tk6\" (UID: \"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0\") " pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.730711 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7c7a0197-c1f8-42e2-bb22-48bae8fe5be0-v4-0-config-system-session\") pod \"oauth-openshift-788ff9cfc5-c2tk6\" (UID: \"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0\") " pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.731211 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7c7a0197-c1f8-42e2-bb22-48bae8fe5be0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-788ff9cfc5-c2tk6\" (UID: \"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0\") " pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.731413 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7c7a0197-c1f8-42e2-bb22-48bae8fe5be0-v4-0-config-system-router-certs\") pod \"oauth-openshift-788ff9cfc5-c2tk6\" (UID: \"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0\") " pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.731579 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7c7a0197-c1f8-42e2-bb22-48bae8fe5be0-v4-0-config-user-template-login\") pod \"oauth-openshift-788ff9cfc5-c2tk6\" (UID: \"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0\") " pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.731887 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7c7a0197-c1f8-42e2-bb22-48bae8fe5be0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-788ff9cfc5-c2tk6\" (UID: \"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0\") " pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.733002 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7c7a0197-c1f8-42e2-bb22-48bae8fe5be0-v4-0-config-user-template-error\") pod \"oauth-openshift-788ff9cfc5-c2tk6\" (UID: \"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0\") " 
pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.735569 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7c7a0197-c1f8-42e2-bb22-48bae8fe5be0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-788ff9cfc5-c2tk6\" (UID: \"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0\") " pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.742017 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmdfn\" (UniqueName: \"kubernetes.io/projected/7c7a0197-c1f8-42e2-bb22-48bae8fe5be0-kube-api-access-qmdfn\") pod \"oauth-openshift-788ff9cfc5-c2tk6\" (UID: \"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0\") " pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:10 crc kubenswrapper[5124]: I0126 00:12:10.879828 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:11 crc kubenswrapper[5124]: I0126 00:12:11.107427 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6"] Jan 26 00:12:11 crc kubenswrapper[5124]: I0126 00:12:11.258518 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" event={"ID":"b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f","Type":"ContainerDied","Data":"d1d7d8d9f9479e246d68b7bc53df457d47531d1300caa96cf2e7bca02853a139"} Jan 26 00:12:11 crc kubenswrapper[5124]: I0126 00:12:11.258550 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-v5jrb" Jan 26 00:12:11 crc kubenswrapper[5124]: I0126 00:12:11.258909 5124 scope.go:117] "RemoveContainer" containerID="5550c9d24114d2b86df37d3cf1645f9455ef504a5b4c0810680a7b7c05ac758c" Jan 26 00:12:11 crc kubenswrapper[5124]: I0126 00:12:11.259551 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" event={"ID":"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0","Type":"ContainerStarted","Data":"91e9eedb5f22140dba6c4eb60ed3b41a728e23a888e2239467f14341e4ea1fc4"} Jan 26 00:12:11 crc kubenswrapper[5124]: I0126 00:12:11.301141 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-v5jrb"] Jan 26 00:12:11 crc kubenswrapper[5124]: I0126 00:12:11.303342 5124 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-v5jrb"] Jan 26 00:12:12 crc kubenswrapper[5124]: I0126 00:12:12.136363 5124 ???:1] "http: TLS handshake error from 192.168.126.11:39100: no serving certificate available for the kubelet" Jan 26 00:12:12 crc kubenswrapper[5124]: I0126 00:12:12.270584 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" event={"ID":"7c7a0197-c1f8-42e2-bb22-48bae8fe5be0","Type":"ContainerStarted","Data":"aba8d8a632eb4a4c3337e6925d2cebe1a618edd751d247e537a49aa353876bde"} Jan 26 00:12:12 crc kubenswrapper[5124]: I0126 00:12:12.312724 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" podStartSLOduration=28.312692671 podStartE2EDuration="28.312692671s" podCreationTimestamp="2026-01-26 
00:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:12:12.303967345 +0000 UTC m=+210.212886754" watchObservedRunningTime="2026-01-26 00:12:12.312692671 +0000 UTC m=+210.221612070" Jan 26 00:12:12 crc kubenswrapper[5124]: I0126 00:12:12.379321 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f" path="/var/lib/kubelet/pods/b41910c7-7e0f-4ae2-87b8-ffbee4a5fb8f/volumes" Jan 26 00:12:13 crc kubenswrapper[5124]: I0126 00:12:13.278862 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:13 crc kubenswrapper[5124]: I0126 00:12:13.285049 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-788ff9cfc5-c2tk6" Jan 26 00:12:15 crc kubenswrapper[5124]: I0126 00:12:15.892808 5124 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 00:12:15 crc kubenswrapper[5124]: I0126 00:12:15.893883 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://b1e32d7d7a0137f2bc27ff1b6a2c7eadea48ec9c2b0832f560abf73951e16970" gracePeriod=15 Jan 26 00:12:15 crc kubenswrapper[5124]: I0126 00:12:15.893944 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://6d8b9a76e6a593a00eb07a766e1124a3590c5c94c41c554bebb577109de5a4ab" gracePeriod=15 Jan 26 00:12:15 crc kubenswrapper[5124]: I0126 00:12:15.893959 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://d8e80c933db284b36b8dafc7bc44abe6be54c57c85857f99b2194d01cced7327" gracePeriod=15 Jan 26 00:12:15 crc kubenswrapper[5124]: I0126 00:12:15.894127 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://37fbcde7240eabecd5368c44cfa4027f8d40c4f52393eb773692e55130233c75" gracePeriod=15 Jan 26 00:12:15 crc kubenswrapper[5124]: I0126 00:12:15.894255 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://2f8ecaa38888bb973b4fd3205014aa0edb7c85e52834f767b37256195a18e041" gracePeriod=15 Jan 26 00:12:15 crc kubenswrapper[5124]: I0126 00:12:15.894695 5124 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 00:12:15 crc kubenswrapper[5124]: I0126 00:12:15.895496 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 26 00:12:15 crc kubenswrapper[5124]: I0126 00:12:15.896989 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" 
containerName="kube-apiserver-check-endpoints" Jan 26 00:12:15 crc kubenswrapper[5124]: I0126 00:12:15.897019 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 26 00:12:15 crc kubenswrapper[5124]: I0126 00:12:15.897031 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 26 00:12:15 crc kubenswrapper[5124]: I0126 00:12:15.897041 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 26 00:12:15 crc kubenswrapper[5124]: I0126 00:12:15.897049 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 26 00:12:15 crc kubenswrapper[5124]: I0126 00:12:15.897065 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 00:12:15 crc kubenswrapper[5124]: I0126 00:12:15.897073 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 00:12:15 crc kubenswrapper[5124]: I0126 00:12:15.897097 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 26 00:12:15 crc kubenswrapper[5124]: I0126 00:12:15.897104 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 26 00:12:15 crc kubenswrapper[5124]: I0126 00:12:15.897118 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 26 00:12:15 crc kubenswrapper[5124]: I0126 00:12:15.897125 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 26 00:12:15 crc kubenswrapper[5124]: I0126 00:12:15.897137 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Jan 26 00:12:15 crc kubenswrapper[5124]: I0126 00:12:15.897144 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Jan 26 00:12:15 crc kubenswrapper[5124]: I0126 00:12:15.897164 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 26 00:12:15 crc kubenswrapper[5124]: I0126 00:12:15.897171 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 26 00:12:15 crc kubenswrapper[5124]: I0126 00:12:15.897184 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 26 00:12:15 crc kubenswrapper[5124]: I0126 00:12:15.897191 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 26 00:12:15 crc kubenswrapper[5124]: I0126 00:12:15.897357 5124 memory_manager.go:356] 
"RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 26 00:12:15 crc kubenswrapper[5124]: I0126 00:12:15.897372 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 26 00:12:15 crc kubenswrapper[5124]: I0126 00:12:15.897385 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 26 00:12:15 crc kubenswrapper[5124]: I0126 00:12:15.897396 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 00:12:15 crc kubenswrapper[5124]: I0126 00:12:15.897406 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 26 00:12:15 crc kubenswrapper[5124]: I0126 00:12:15.897416 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 26 00:12:15 crc kubenswrapper[5124]: I0126 00:12:15.897426 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 26 00:12:15 crc kubenswrapper[5124]: I0126 00:12:15.897553 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 26 00:12:15 crc kubenswrapper[5124]: I0126 00:12:15.897563 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 26 00:12:15 crc kubenswrapper[5124]: I0126 00:12:15.897701 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 26 00:12:15 crc kubenswrapper[5124]: I0126 00:12:15.897714 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 26 00:12:15 crc kubenswrapper[5124]: I0126 00:12:15.904601 5124 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 00:12:15 crc kubenswrapper[5124]: I0126 00:12:15.912047 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:12:15 crc kubenswrapper[5124]: I0126 00:12:15.919883 5124 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="3a14caf222afb62aaabdc47808b6f944" podUID="57755cc5f99000cc11e193051474d4e2" Jan 26 00:12:15 crc kubenswrapper[5124]: I0126 00:12:15.937915 5124 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:12:16 crc kubenswrapper[5124]: I0126 00:12:16.003932 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:12:16 crc kubenswrapper[5124]: I0126 00:12:16.003990 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:12:16 crc kubenswrapper[5124]: I0126 00:12:16.004023 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:12:16 crc kubenswrapper[5124]: I0126 00:12:16.004261 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:12:16 crc kubenswrapper[5124]: I0126 00:12:16.004524 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:12:16 crc kubenswrapper[5124]: I0126 00:12:16.004596 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:12:16 crc kubenswrapper[5124]: I0126 00:12:16.004666 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:12:16 crc kubenswrapper[5124]: I0126 00:12:16.004715 5124 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:12:16 crc kubenswrapper[5124]: I0126 00:12:16.004781 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:12:16 crc kubenswrapper[5124]: I0126 00:12:16.004853 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:12:16 crc kubenswrapper[5124]: E0126 00:12:16.043446 5124 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 26 00:12:16 crc kubenswrapper[5124]: E0126 00:12:16.043784 5124 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 26 00:12:16 crc kubenswrapper[5124]: E0126 00:12:16.044134 5124 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 26 00:12:16 crc kubenswrapper[5124]: E0126 00:12:16.044394 5124 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 26 00:12:16 crc kubenswrapper[5124]: E0126 00:12:16.044648 5124 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 26 00:12:16 crc kubenswrapper[5124]: I0126 00:12:16.044675 5124 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 26 00:12:16 crc kubenswrapper[5124]: E0126 00:12:16.044887 5124 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="200ms" Jan 26 00:12:16 crc kubenswrapper[5124]: I0126 00:12:16.105918 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:12:16 crc kubenswrapper[5124]: I0126 00:12:16.105961 5124 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:12:16 crc kubenswrapper[5124]: I0126 00:12:16.105979 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:12:16 crc kubenswrapper[5124]: I0126 00:12:16.106002 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:12:16 crc kubenswrapper[5124]: I0126 00:12:16.106021 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:12:16 crc kubenswrapper[5124]: I0126 00:12:16.106044 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:12:16 crc kubenswrapper[5124]: I0126 00:12:16.106083 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:12:16 crc kubenswrapper[5124]: I0126 00:12:16.106099 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:12:16 crc kubenswrapper[5124]: I0126 00:12:16.106106 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:12:16 crc kubenswrapper[5124]: I0126 00:12:16.106146 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:12:16 crc kubenswrapper[5124]: I0126 00:12:16.106180 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:12:16 crc kubenswrapper[5124]: I0126 00:12:16.106187 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:12:16 crc kubenswrapper[5124]: I0126 00:12:16.106208 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:12:16 crc kubenswrapper[5124]: I0126 00:12:16.106248 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:12:16 crc kubenswrapper[5124]: I0126 00:12:16.106251 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:12:16 crc kubenswrapper[5124]: I0126 00:12:16.106293 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:12:16 crc kubenswrapper[5124]: I0126 00:12:16.106361 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:12:16 crc kubenswrapper[5124]: I0126 00:12:16.106389 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:12:16 crc kubenswrapper[5124]: I0126 00:12:16.106502 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:12:16 crc kubenswrapper[5124]: I0126 00:12:16.106569 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: 
\"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:12:16 crc kubenswrapper[5124]: E0126 00:12:16.246208 5124 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="400ms" Jan 26 00:12:16 crc kubenswrapper[5124]: I0126 00:12:16.303667 5124 generic.go:358] "Generic (PLEG): container finished" podID="a3d95296-9ae1-4722-9d5d-bdd64e912859" containerID="11550495a254dd97b4e363714ba7723d3c0c77372f9c1b9b098891b1a4d5ea31" exitCode=0 Jan 26 00:12:16 crc kubenswrapper[5124]: I0126 00:12:16.303739 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"a3d95296-9ae1-4722-9d5d-bdd64e912859","Type":"ContainerDied","Data":"11550495a254dd97b4e363714ba7723d3c0c77372f9c1b9b098891b1a4d5ea31"} Jan 26 00:12:16 crc kubenswrapper[5124]: I0126 00:12:16.304297 5124 status_manager.go:895] "Failed to get status for pod" podUID="a3d95296-9ae1-4722-9d5d-bdd64e912859" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 26 00:12:16 crc kubenswrapper[5124]: I0126 00:12:16.305837 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 26 00:12:16 crc kubenswrapper[5124]: I0126 00:12:16.307478 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 26 00:12:16 crc kubenswrapper[5124]: I0126 00:12:16.308118 5124 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="b1e32d7d7a0137f2bc27ff1b6a2c7eadea48ec9c2b0832f560abf73951e16970" exitCode=0 Jan 26 00:12:16 crc kubenswrapper[5124]: I0126 00:12:16.308138 5124 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="d8e80c933db284b36b8dafc7bc44abe6be54c57c85857f99b2194d01cced7327" exitCode=0 Jan 26 00:12:16 crc kubenswrapper[5124]: I0126 00:12:16.308146 5124 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="6d8b9a76e6a593a00eb07a766e1124a3590c5c94c41c554bebb577109de5a4ab" exitCode=0 Jan 26 00:12:16 crc kubenswrapper[5124]: I0126 00:12:16.308152 5124 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="37fbcde7240eabecd5368c44cfa4027f8d40c4f52393eb773692e55130233c75" exitCode=2 Jan 26 00:12:16 crc kubenswrapper[5124]: I0126 00:12:16.308191 5124 scope.go:117] "RemoveContainer" containerID="6215e20f15c7a51f410c9c54859dda249912a0f1e02d737e53f957cd8d73cd32" Jan 26 00:12:16 crc kubenswrapper[5124]: E0126 00:12:16.647990 5124 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="800ms" Jan 26 00:12:17 crc kubenswrapper[5124]: I0126 00:12:17.316939 5124 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 26 00:12:17 crc kubenswrapper[5124]: E0126 00:12:17.449768 5124 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="1.6s" Jan 26 00:12:17 crc kubenswrapper[5124]: I0126 00:12:17.566077 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:12:17 crc kubenswrapper[5124]: I0126 00:12:17.566531 5124 status_manager.go:895] "Failed to get status for pod" podUID="a3d95296-9ae1-4722-9d5d-bdd64e912859" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 26 00:12:17 crc kubenswrapper[5124]: I0126 00:12:17.628154 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3d95296-9ae1-4722-9d5d-bdd64e912859-kube-api-access\") pod \"a3d95296-9ae1-4722-9d5d-bdd64e912859\" (UID: \"a3d95296-9ae1-4722-9d5d-bdd64e912859\") " Jan 26 00:12:17 crc kubenswrapper[5124]: I0126 00:12:17.628279 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a3d95296-9ae1-4722-9d5d-bdd64e912859-var-lock\") pod \"a3d95296-9ae1-4722-9d5d-bdd64e912859\" (UID: \"a3d95296-9ae1-4722-9d5d-bdd64e912859\") " Jan 26 00:12:17 crc kubenswrapper[5124]: I0126 00:12:17.628313 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3d95296-9ae1-4722-9d5d-bdd64e912859-kubelet-dir\") pod \"a3d95296-9ae1-4722-9d5d-bdd64e912859\" (UID: \"a3d95296-9ae1-4722-9d5d-bdd64e912859\") " Jan 26 00:12:17 crc kubenswrapper[5124]: I0126 00:12:17.628505 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3d95296-9ae1-4722-9d5d-bdd64e912859-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a3d95296-9ae1-4722-9d5d-bdd64e912859" (UID: "a3d95296-9ae1-4722-9d5d-bdd64e912859"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:12:17 crc kubenswrapper[5124]: I0126 00:12:17.628532 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3d95296-9ae1-4722-9d5d-bdd64e912859-var-lock" (OuterVolumeSpecName: "var-lock") pod "a3d95296-9ae1-4722-9d5d-bdd64e912859" (UID: "a3d95296-9ae1-4722-9d5d-bdd64e912859"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:12:17 crc kubenswrapper[5124]: I0126 00:12:17.633194 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3d95296-9ae1-4722-9d5d-bdd64e912859-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a3d95296-9ae1-4722-9d5d-bdd64e912859" (UID: "a3d95296-9ae1-4722-9d5d-bdd64e912859"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:12:17 crc kubenswrapper[5124]: I0126 00:12:17.730037 5124 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a3d95296-9ae1-4722-9d5d-bdd64e912859-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:17 crc kubenswrapper[5124]: I0126 00:12:17.730078 5124 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3d95296-9ae1-4722-9d5d-bdd64e912859-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:17 crc kubenswrapper[5124]: I0126 00:12:17.730091 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3d95296-9ae1-4722-9d5d-bdd64e912859-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.283060 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.284053 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.284691 5124 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.285138 5124 status_manager.go:895] "Failed to get status for pod" podUID="a3d95296-9ae1-4722-9d5d-bdd64e912859" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.324425 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.324427 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"a3d95296-9ae1-4722-9d5d-bdd64e912859","Type":"ContainerDied","Data":"bff2494ca9011639ff9b6a84ad09356f651154aaddc4a5e381f082dc1fda9513"} Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.324560 5124 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bff2494ca9011639ff9b6a84ad09356f651154aaddc4a5e381f082dc1fda9513" Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.326920 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.327491 5124 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="2f8ecaa38888bb973b4fd3205014aa0edb7c85e52834f767b37256195a18e041" exitCode=0 Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.327548 5124 scope.go:117] "RemoveContainer" containerID="b1e32d7d7a0137f2bc27ff1b6a2c7eadea48ec9c2b0832f560abf73951e16970" Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.327615 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.339412 5124 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.339893 5124 status_manager.go:895] "Failed to get status for pod" podUID="a3d95296-9ae1-4722-9d5d-bdd64e912859" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.342364 5124 scope.go:117] "RemoveContainer" containerID="d8e80c933db284b36b8dafc7bc44abe6be54c57c85857f99b2194d01cced7327" Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.353070 5124 scope.go:117] "RemoveContainer" containerID="6d8b9a76e6a593a00eb07a766e1124a3590c5c94c41c554bebb577109de5a4ab" Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.363161 5124 scope.go:117] "RemoveContainer" containerID="37fbcde7240eabecd5368c44cfa4027f8d40c4f52393eb773692e55130233c75" Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.373691 5124 scope.go:117] "RemoveContainer" containerID="2f8ecaa38888bb973b4fd3205014aa0edb7c85e52834f767b37256195a18e041" Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.391001 5124 scope.go:117] "RemoveContainer" containerID="66f10dcb1c1f631da6488a7b4271bb9abc58d887ad17e7515550b916cf9a60cc" Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.436659 5124 scope.go:117] "RemoveContainer" containerID="b1e32d7d7a0137f2bc27ff1b6a2c7eadea48ec9c2b0832f560abf73951e16970" Jan 26 00:12:18 crc kubenswrapper[5124]: E0126 00:12:18.437037 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b1e32d7d7a0137f2bc27ff1b6a2c7eadea48ec9c2b0832f560abf73951e16970\": container with ID starting with b1e32d7d7a0137f2bc27ff1b6a2c7eadea48ec9c2b0832f560abf73951e16970 not found: ID does not exist" containerID="b1e32d7d7a0137f2bc27ff1b6a2c7eadea48ec9c2b0832f560abf73951e16970" Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.437071 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1e32d7d7a0137f2bc27ff1b6a2c7eadea48ec9c2b0832f560abf73951e16970"} err="failed to get container status \"b1e32d7d7a0137f2bc27ff1b6a2c7eadea48ec9c2b0832f560abf73951e16970\": rpc error: code = NotFound desc = could not find container \"b1e32d7d7a0137f2bc27ff1b6a2c7eadea48ec9c2b0832f560abf73951e16970\": container with ID starting with b1e32d7d7a0137f2bc27ff1b6a2c7eadea48ec9c2b0832f560abf73951e16970 not found: ID does not exist" Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.437096 5124 scope.go:117] "RemoveContainer" containerID="d8e80c933db284b36b8dafc7bc44abe6be54c57c85857f99b2194d01cced7327" Jan 26 00:12:18 crc kubenswrapper[5124]: E0126 00:12:18.437386 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8e80c933db284b36b8dafc7bc44abe6be54c57c85857f99b2194d01cced7327\": container with ID starting with d8e80c933db284b36b8dafc7bc44abe6be54c57c85857f99b2194d01cced7327 not found: ID does not exist" containerID="d8e80c933db284b36b8dafc7bc44abe6be54c57c85857f99b2194d01cced7327" Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.437414 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8e80c933db284b36b8dafc7bc44abe6be54c57c85857f99b2194d01cced7327"} err="failed to get container status \"d8e80c933db284b36b8dafc7bc44abe6be54c57c85857f99b2194d01cced7327\": rpc error: code = NotFound desc = could not find container \"d8e80c933db284b36b8dafc7bc44abe6be54c57c85857f99b2194d01cced7327\": container with ID starting with d8e80c933db284b36b8dafc7bc44abe6be54c57c85857f99b2194d01cced7327 not found: ID does not exist" Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.437429 5124 scope.go:117] "RemoveContainer" containerID="6d8b9a76e6a593a00eb07a766e1124a3590c5c94c41c554bebb577109de5a4ab" Jan 26 00:12:18 crc kubenswrapper[5124]: E0126 00:12:18.437802 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d8b9a76e6a593a00eb07a766e1124a3590c5c94c41c554bebb577109de5a4ab\": container with ID starting with 6d8b9a76e6a593a00eb07a766e1124a3590c5c94c41c554bebb577109de5a4ab not found: ID does not exist" containerID="6d8b9a76e6a593a00eb07a766e1124a3590c5c94c41c554bebb577109de5a4ab" Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.437859 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d8b9a76e6a593a00eb07a766e1124a3590c5c94c41c554bebb577109de5a4ab"} err="failed to get container status \"6d8b9a76e6a593a00eb07a766e1124a3590c5c94c41c554bebb577109de5a4ab\": rpc error: code = NotFound desc = could not find container \"6d8b9a76e6a593a00eb07a766e1124a3590c5c94c41c554bebb577109de5a4ab\": container with ID starting with 6d8b9a76e6a593a00eb07a766e1124a3590c5c94c41c554bebb577109de5a4ab not found: ID does not exist" Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.437892 5124 scope.go:117] "RemoveContainer" containerID="37fbcde7240eabecd5368c44cfa4027f8d40c4f52393eb773692e55130233c75" Jan 26 00:12:18 crc 
kubenswrapper[5124]: E0126 00:12:18.438154 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37fbcde7240eabecd5368c44cfa4027f8d40c4f52393eb773692e55130233c75\": container with ID starting with 37fbcde7240eabecd5368c44cfa4027f8d40c4f52393eb773692e55130233c75 not found: ID does not exist" containerID="37fbcde7240eabecd5368c44cfa4027f8d40c4f52393eb773692e55130233c75" Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.438184 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37fbcde7240eabecd5368c44cfa4027f8d40c4f52393eb773692e55130233c75"} err="failed to get container status \"37fbcde7240eabecd5368c44cfa4027f8d40c4f52393eb773692e55130233c75\": rpc error: code = NotFound desc = could not find container \"37fbcde7240eabecd5368c44cfa4027f8d40c4f52393eb773692e55130233c75\": container with ID starting with 37fbcde7240eabecd5368c44cfa4027f8d40c4f52393eb773692e55130233c75 not found: ID does not exist" Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.438201 5124 scope.go:117] "RemoveContainer" containerID="2f8ecaa38888bb973b4fd3205014aa0edb7c85e52834f767b37256195a18e041" Jan 26 00:12:18 crc kubenswrapper[5124]: E0126 00:12:18.438421 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f8ecaa38888bb973b4fd3205014aa0edb7c85e52834f767b37256195a18e041\": container with ID starting with 2f8ecaa38888bb973b4fd3205014aa0edb7c85e52834f767b37256195a18e041 not found: ID does not exist" containerID="2f8ecaa38888bb973b4fd3205014aa0edb7c85e52834f767b37256195a18e041" Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.438447 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f8ecaa38888bb973b4fd3205014aa0edb7c85e52834f767b37256195a18e041"} err="failed to get container status \"2f8ecaa38888bb973b4fd3205014aa0edb7c85e52834f767b37256195a18e041\": rpc error: code = NotFound desc = could not find container \"2f8ecaa38888bb973b4fd3205014aa0edb7c85e52834f767b37256195a18e041\": container with ID starting with 2f8ecaa38888bb973b4fd3205014aa0edb7c85e52834f767b37256195a18e041 not found: ID does not exist" Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.438459 5124 scope.go:117] "RemoveContainer" containerID="66f10dcb1c1f631da6488a7b4271bb9abc58d887ad17e7515550b916cf9a60cc" Jan 26 00:12:18 crc kubenswrapper[5124]: E0126 00:12:18.438656 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66f10dcb1c1f631da6488a7b4271bb9abc58d887ad17e7515550b916cf9a60cc\": container with ID starting with 66f10dcb1c1f631da6488a7b4271bb9abc58d887ad17e7515550b916cf9a60cc not found: ID does not exist" containerID="66f10dcb1c1f631da6488a7b4271bb9abc58d887ad17e7515550b916cf9a60cc" Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.438694 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66f10dcb1c1f631da6488a7b4271bb9abc58d887ad17e7515550b916cf9a60cc"} err="failed to get container status \"66f10dcb1c1f631da6488a7b4271bb9abc58d887ad17e7515550b916cf9a60cc\": rpc error: code = NotFound desc = could not find container \"66f10dcb1c1f631da6488a7b4271bb9abc58d887ad17e7515550b916cf9a60cc\": container with ID starting with 66f10dcb1c1f631da6488a7b4271bb9abc58d887ad17e7515550b916cf9a60cc not found: ID does not exist" Jan 26 00:12:18 crc kubenswrapper[5124]: 
I0126 00:12:18.451226 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.451273 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.451283 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.451367 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.451392 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.451413 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.451403 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.451460 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.452106 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.452350 5124 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.452425 5124 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.452443 5124 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.454693 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.553417 5124 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.553451 5124 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.641937 5124 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 26 00:12:18 crc kubenswrapper[5124]: I0126 00:12:18.642464 5124 status_manager.go:895] "Failed to get status for pod" podUID="a3d95296-9ae1-4722-9d5d-bdd64e912859" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 26 00:12:19 crc kubenswrapper[5124]: E0126 00:12:19.051778 5124 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="3.2s" Jan 26 00:12:20 crc kubenswrapper[5124]: I0126 00:12:20.375516 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Jan 26 00:12:21 crc kubenswrapper[5124]: E0126 00:12:20.939822 5124 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.219:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:12:21 crc kubenswrapper[5124]: I0126 00:12:20.940501 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:12:21 crc kubenswrapper[5124]: W0126 00:12:20.971163 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7dbc7e1ee9c187a863ef9b473fad27b.slice/crio-91d69d3ff9aab9bc4ab9c8cb3e7dfedc8f138f271bb9dcda24ecc6b1a75b5230 WatchSource:0}: Error finding container 91d69d3ff9aab9bc4ab9c8cb3e7dfedc8f138f271bb9dcda24ecc6b1a75b5230: Status 404 returned error can't find the container with id 91d69d3ff9aab9bc4ab9c8cb3e7dfedc8f138f271bb9dcda24ecc6b1a75b5230 Jan 26 00:12:21 crc kubenswrapper[5124]: E0126 00:12:20.974578 5124 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.219:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e1f80a42effe5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:12:20.974002149 +0000 UTC m=+218.882921498,LastTimestamp:2026-01-26 00:12:20.974002149 +0000 UTC m=+218.882921498,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:12:21 crc kubenswrapper[5124]: I0126 00:12:21.351345 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"91d69d3ff9aab9bc4ab9c8cb3e7dfedc8f138f271bb9dcda24ecc6b1a75b5230"} Jan 26 00:12:22 crc kubenswrapper[5124]: E0126 00:12:22.252758 5124 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="6.4s" Jan 26 00:12:22 crc kubenswrapper[5124]: I0126 00:12:22.368535 5124 status_manager.go:895] "Failed to get status for pod" podUID="a3d95296-9ae1-4722-9d5d-bdd64e912859" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 26 00:12:22 crc kubenswrapper[5124]: I0126 00:12:22.368937 5124 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:12:22 crc kubenswrapper[5124]: E0126 00:12:22.369207 5124 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.219:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:12:22 crc kubenswrapper[5124]: I0126 00:12:22.369335 5124 status_manager.go:895] "Failed to get status for pod" 
podUID="a3d95296-9ae1-4722-9d5d-bdd64e912859" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 26 00:12:22 crc kubenswrapper[5124]: I0126 00:12:22.384683 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"b97eb685ffe79f227a9c02e63d305006d67b7ea602a8521b5b731a207679c2a8"} Jan 26 00:12:22 crc kubenswrapper[5124]: E0126 00:12:22.412226 5124 desired_state_of_world_populator.go:305] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.219:6443: connect: connection refused" pod="openshift-image-registry/image-registry-66587d64c8-25hx6" volumeName="registry-storage" Jan 26 00:12:23 crc kubenswrapper[5124]: I0126 00:12:23.374072 5124 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:12:23 crc kubenswrapper[5124]: E0126 00:12:23.374852 5124 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.219:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:12:26 crc kubenswrapper[5124]: E0126 00:12:26.894896 5124 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.219:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e1f80a42effe5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:12:20.974002149 +0000 UTC m=+218.882921498,LastTimestamp:2026-01-26 00:12:20.974002149 +0000 UTC m=+218.882921498,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:12:27 crc kubenswrapper[5124]: I0126 00:12:27.364710 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:12:27 crc kubenswrapper[5124]: I0126 00:12:27.365880 5124 status_manager.go:895] "Failed to get status for pod" podUID="a3d95296-9ae1-4722-9d5d-bdd64e912859" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 26 00:12:27 crc kubenswrapper[5124]: I0126 00:12:27.391225 5124 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4fa44516-2654-456d-893a-96341101557c" Jan 26 00:12:27 crc kubenswrapper[5124]: I0126 00:12:27.391280 5124 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4fa44516-2654-456d-893a-96341101557c" Jan 26 00:12:27 crc kubenswrapper[5124]: E0126 00:12:27.391775 5124 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:12:27 crc kubenswrapper[5124]: I0126 00:12:27.392303 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:12:28 crc kubenswrapper[5124]: I0126 00:12:28.405971 5124 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="a9d317d4283213df38ea8929bc03020e36875e73523b42d9279977003bb35349" exitCode=0 Jan 26 00:12:28 crc kubenswrapper[5124]: I0126 00:12:28.406027 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"a9d317d4283213df38ea8929bc03020e36875e73523b42d9279977003bb35349"} Jan 26 00:12:28 crc kubenswrapper[5124]: I0126 00:12:28.406055 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"0d1f85265f1b39ba714fa9e6021483cb52636aca80dbb16624a71fb599fbd1c8"} Jan 26 00:12:28 crc kubenswrapper[5124]: I0126 00:12:28.406371 5124 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4fa44516-2654-456d-893a-96341101557c" Jan 26 00:12:28 crc kubenswrapper[5124]: I0126 00:12:28.406386 5124 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4fa44516-2654-456d-893a-96341101557c" Jan 26 00:12:28 crc kubenswrapper[5124]: E0126 00:12:28.406916 5124 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:12:28 crc kubenswrapper[5124]: I0126 00:12:28.407258 5124 status_manager.go:895] "Failed to get status for pod" podUID="a3d95296-9ae1-4722-9d5d-bdd64e912859" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 26 00:12:28 crc kubenswrapper[5124]: E0126 00:12:28.654486 5124 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="7s" Jan 26 00:12:29 crc kubenswrapper[5124]: I0126 00:12:29.417085 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"c4f5f99fea51c06dad40a73c0fe4fc8eecbd3c58220167b1dbd3204cbaa89bac"} Jan 26 00:12:29 crc kubenswrapper[5124]: I0126 00:12:29.417276 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"ffa769978ca191a633954f130996df97c2c235414b86fc983ad8cf2704cf1e50"} Jan 26 00:12:29 crc kubenswrapper[5124]: I0126 00:12:29.417298 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"0644370bb215dff42e30aad1271f4e9ea5e014c1324873a7ae727be06825cc37"} Jan 26 00:12:29 crc kubenswrapper[5124]: I0126 00:12:29.417316 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"4c09f448cc542f7564d321e3c01182cac43ba3a00719e37e31f833b6f0d6e746"} Jan 26 00:12:30 crc kubenswrapper[5124]: I0126 00:12:30.425235 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"f740ece5c535d96b55001b983927974fa63632f06a3df52104fe3633197e9de6"} Jan 26 00:12:30 crc kubenswrapper[5124]: I0126 00:12:30.425493 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:12:30 crc kubenswrapper[5124]: I0126 00:12:30.425723 5124 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4fa44516-2654-456d-893a-96341101557c" Jan 26 00:12:30 crc kubenswrapper[5124]: I0126 00:12:30.425747 5124 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4fa44516-2654-456d-893a-96341101557c" Jan 26 00:12:31 crc kubenswrapper[5124]: I0126 00:12:31.384686 5124 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 26 00:12:31 crc kubenswrapper[5124]: I0126 00:12:31.385045 5124 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 26 00:12:31 crc kubenswrapper[5124]: I0126 00:12:31.433188 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 26 00:12:31 crc kubenswrapper[5124]: I0126 00:12:31.434093 5124 generic.go:358] "Generic (PLEG): container finished" 
podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="6a4d65f95ca5f832e6ac85de46fd3d474221c3263ab1c2eba3123e4742fc5287" exitCode=1 Jan 26 00:12:31 crc kubenswrapper[5124]: I0126 00:12:31.434180 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"6a4d65f95ca5f832e6ac85de46fd3d474221c3263ab1c2eba3123e4742fc5287"} Jan 26 00:12:31 crc kubenswrapper[5124]: I0126 00:12:31.435162 5124 scope.go:117] "RemoveContainer" containerID="6a4d65f95ca5f832e6ac85de46fd3d474221c3263ab1c2eba3123e4742fc5287" Jan 26 00:12:32 crc kubenswrapper[5124]: I0126 00:12:32.393101 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:12:32 crc kubenswrapper[5124]: I0126 00:12:32.393488 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:12:32 crc kubenswrapper[5124]: I0126 00:12:32.402615 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:12:32 crc kubenswrapper[5124]: I0126 00:12:32.454331 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 26 00:12:32 crc kubenswrapper[5124]: I0126 00:12:32.454408 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"1d66c39951d197dcd14c4fa5180e990e4b0f011d7fbee220ce81518e9f83e7ff"} Jan 26 00:12:35 crc kubenswrapper[5124]: I0126 00:12:35.581109 5124 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:12:35 crc kubenswrapper[5124]: I0126 00:12:35.581444 5124 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:12:35 crc kubenswrapper[5124]: I0126 00:12:35.667932 5124 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="9fe286f0-d302-42e7-92a6-f82feb39c2f7" Jan 26 00:12:36 crc kubenswrapper[5124]: I0126 00:12:36.476773 5124 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4fa44516-2654-456d-893a-96341101557c" Jan 26 00:12:36 crc kubenswrapper[5124]: I0126 00:12:36.477013 5124 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4fa44516-2654-456d-893a-96341101557c" Jan 26 00:12:36 crc kubenswrapper[5124]: I0126 00:12:36.479419 5124 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="9fe286f0-d302-42e7-92a6-f82feb39c2f7" Jan 26 00:12:36 crc kubenswrapper[5124]: I0126 00:12:36.481054 5124 status_manager.go:346] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://4c09f448cc542f7564d321e3c01182cac43ba3a00719e37e31f833b6f0d6e746" Jan 26 00:12:36 crc kubenswrapper[5124]: I0126 00:12:36.481078 5124 kubelet.go:2658] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:12:37 crc kubenswrapper[5124]: I0126 00:12:37.481397 5124 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4fa44516-2654-456d-893a-96341101557c" Jan 26 00:12:37 crc kubenswrapper[5124]: I0126 00:12:37.481424 5124 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4fa44516-2654-456d-893a-96341101557c" Jan 26 00:12:37 crc kubenswrapper[5124]: I0126 00:12:37.484538 5124 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="9fe286f0-d302-42e7-92a6-f82feb39c2f7" Jan 26 00:12:40 crc kubenswrapper[5124]: I0126 00:12:40.829887 5124 patch_prober.go:28] interesting pod/machine-config-daemon-kmxcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:12:40 crc kubenswrapper[5124]: I0126 00:12:40.830265 5124 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" podUID="95fa0656-150a-4d93-a324-77a1306d91f7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:12:41 crc kubenswrapper[5124]: I0126 00:12:41.187711 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:12:41 crc kubenswrapper[5124]: I0126 00:12:41.194504 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:12:41 crc kubenswrapper[5124]: I0126 00:12:41.384103 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:12:41 crc kubenswrapper[5124]: I0126 00:12:41.389253 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:12:45 crc kubenswrapper[5124]: I0126 00:12:45.329858 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:12:45 crc kubenswrapper[5124]: I0126 00:12:45.486662 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Jan 26 00:12:45 crc kubenswrapper[5124]: I0126 00:12:45.973111 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Jan 26 00:12:46 crc kubenswrapper[5124]: I0126 00:12:46.336263 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Jan 26 00:12:46 crc kubenswrapper[5124]: I0126 00:12:46.480708 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:12:46 crc kubenswrapper[5124]: I0126 00:12:46.785727 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Jan 26 
00:12:46 crc kubenswrapper[5124]: I0126 00:12:46.834470 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 26 00:12:46 crc kubenswrapper[5124]: I0126 00:12:46.850549 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Jan 26 00:12:47 crc kubenswrapper[5124]: I0126 00:12:47.036248 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Jan 26 00:12:47 crc kubenswrapper[5124]: I0126 00:12:47.093406 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Jan 26 00:12:47 crc kubenswrapper[5124]: I0126 00:12:47.115251 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Jan 26 00:12:47 crc kubenswrapper[5124]: I0126 00:12:47.227059 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Jan 26 00:12:47 crc kubenswrapper[5124]: I0126 00:12:47.800529 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Jan 26 00:12:47 crc kubenswrapper[5124]: I0126 00:12:47.855553 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Jan 26 00:12:47 crc kubenswrapper[5124]: I0126 00:12:47.867128 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Jan 26 00:12:47 crc kubenswrapper[5124]: I0126 00:12:47.943362 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Jan 26 00:12:48 crc kubenswrapper[5124]: I0126 00:12:48.112668 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Jan 26 00:12:48 crc kubenswrapper[5124]: I0126 00:12:48.199047 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Jan 26 00:12:48 crc kubenswrapper[5124]: I0126 00:12:48.355436 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:12:48 crc kubenswrapper[5124]: I0126 00:12:48.382716 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Jan 26 00:12:48 crc kubenswrapper[5124]: I0126 00:12:48.511714 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Jan 26 00:12:48 crc kubenswrapper[5124]: I0126 00:12:48.826019 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Jan 26 00:12:48 crc kubenswrapper[5124]: I0126 00:12:48.888647 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Jan 26 00:12:48 crc kubenswrapper[5124]: I0126 00:12:48.927059 5124 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Jan 26 00:12:49 crc kubenswrapper[5124]: I0126 00:12:49.096432 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Jan 26 00:12:49 crc kubenswrapper[5124]: I0126 00:12:49.172367 5124 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Jan 26 00:12:49 crc kubenswrapper[5124]: I0126 00:12:49.176640 5124 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 00:12:49 crc kubenswrapper[5124]: I0126 00:12:49.176689 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 00:12:49 crc kubenswrapper[5124]: I0126 00:12:49.187284 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:12:49 crc kubenswrapper[5124]: I0126 00:12:49.196725 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=14.196703263 podStartE2EDuration="14.196703263s" podCreationTimestamp="2026-01-26 00:12:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:12:49.196179499 +0000 UTC m=+247.105098848" watchObservedRunningTime="2026-01-26 00:12:49.196703263 +0000 UTC m=+247.105622632" Jan 26 00:12:49 crc kubenswrapper[5124]: I0126 00:12:49.314617 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:12:49 crc kubenswrapper[5124]: I0126 00:12:49.337699 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Jan 26 00:12:49 crc kubenswrapper[5124]: I0126 00:12:49.454147 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 26 00:12:49 crc kubenswrapper[5124]: I0126 00:12:49.499859 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:12:49 crc kubenswrapper[5124]: I0126 00:12:49.525977 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Jan 26 00:12:49 crc kubenswrapper[5124]: I0126 00:12:49.526705 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Jan 26 00:12:49 crc kubenswrapper[5124]: I0126 00:12:49.532230 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Jan 26 00:12:49 crc kubenswrapper[5124]: I0126 00:12:49.734114 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 26 00:12:49 crc kubenswrapper[5124]: I0126 00:12:49.838009 5124 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 26 00:12:49 crc kubenswrapper[5124]: I0126 00:12:49.878979 5124 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Jan 26 00:12:49 crc kubenswrapper[5124]: I0126 00:12:49.955383 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Jan 26 00:12:49 crc kubenswrapper[5124]: I0126 00:12:49.955407 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Jan 26 00:12:50 crc kubenswrapper[5124]: I0126 00:12:50.029143 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Jan 26 00:12:50 crc kubenswrapper[5124]: I0126 00:12:50.123278 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Jan 26 00:12:50 crc kubenswrapper[5124]: I0126 00:12:50.170333 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"serviceca\"" Jan 26 00:12:50 crc kubenswrapper[5124]: I0126 00:12:50.220171 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Jan 26 00:12:50 crc kubenswrapper[5124]: I0126 00:12:50.420811 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Jan 26 00:12:50 crc kubenswrapper[5124]: I0126 00:12:50.486108 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Jan 26 00:12:50 crc kubenswrapper[5124]: I0126 00:12:50.605136 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Jan 26 00:12:50 crc kubenswrapper[5124]: I0126 00:12:50.708317 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Jan 26 00:12:50 crc kubenswrapper[5124]: I0126 00:12:50.726506 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Jan 26 00:12:50 crc kubenswrapper[5124]: I0126 00:12:50.773991 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Jan 26 00:12:50 crc kubenswrapper[5124]: I0126 00:12:50.776473 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Jan 26 00:12:50 crc kubenswrapper[5124]: I0126 00:12:50.917635 5124 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Jan 26 00:12:50 crc kubenswrapper[5124]: I0126 00:12:50.948133 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Jan 26 00:12:50 crc kubenswrapper[5124]: I0126 00:12:50.996225 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Jan 26 00:12:51 crc kubenswrapper[5124]: I0126 00:12:51.055281 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Jan 
26 00:12:51 crc kubenswrapper[5124]: I0126 00:12:51.055333 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Jan 26 00:12:51 crc kubenswrapper[5124]: I0126 00:12:51.145990 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Jan 26 00:12:51 crc kubenswrapper[5124]: I0126 00:12:51.284406 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Jan 26 00:12:51 crc kubenswrapper[5124]: I0126 00:12:51.326487 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Jan 26 00:12:51 crc kubenswrapper[5124]: I0126 00:12:51.341702 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Jan 26 00:12:51 crc kubenswrapper[5124]: I0126 00:12:51.494354 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Jan 26 00:12:51 crc kubenswrapper[5124]: I0126 00:12:51.528819 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Jan 26 00:12:51 crc kubenswrapper[5124]: I0126 00:12:51.621243 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Jan 26 00:12:51 crc kubenswrapper[5124]: I0126 00:12:51.642445 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Jan 26 00:12:51 crc kubenswrapper[5124]: I0126 00:12:51.731793 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Jan 26 00:12:51 crc kubenswrapper[5124]: I0126 00:12:51.784526 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Jan 26 00:12:51 crc kubenswrapper[5124]: I0126 00:12:51.845610 5124 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Jan 26 00:12:51 crc kubenswrapper[5124]: I0126 00:12:51.852292 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Jan 26 00:12:51 crc kubenswrapper[5124]: I0126 00:12:51.871333 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Jan 26 00:12:51 crc kubenswrapper[5124]: I0126 00:12:51.899333 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Jan 26 00:12:51 crc kubenswrapper[5124]: I0126 00:12:51.981693 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Jan 26 00:12:52 crc kubenswrapper[5124]: I0126 00:12:52.036257 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Jan 26 00:12:52 crc kubenswrapper[5124]: I0126 00:12:52.059261 5124 reflector.go:430] 
"Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Jan 26 00:12:52 crc kubenswrapper[5124]: I0126 00:12:52.071389 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Jan 26 00:12:52 crc kubenswrapper[5124]: I0126 00:12:52.119193 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Jan 26 00:12:52 crc kubenswrapper[5124]: I0126 00:12:52.127846 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Jan 26 00:12:52 crc kubenswrapper[5124]: I0126 00:12:52.292890 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Jan 26 00:12:52 crc kubenswrapper[5124]: I0126 00:12:52.325515 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Jan 26 00:12:52 crc kubenswrapper[5124]: I0126 00:12:52.368900 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Jan 26 00:12:52 crc kubenswrapper[5124]: I0126 00:12:52.403663 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Jan 26 00:12:52 crc kubenswrapper[5124]: I0126 00:12:52.409231 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Jan 26 00:12:52 crc kubenswrapper[5124]: I0126 00:12:52.412221 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Jan 26 00:12:52 crc kubenswrapper[5124]: I0126 00:12:52.511173 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Jan 26 00:12:52 crc kubenswrapper[5124]: I0126 00:12:52.549572 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Jan 26 00:12:52 crc kubenswrapper[5124]: I0126 00:12:52.550636 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Jan 26 00:12:52 crc kubenswrapper[5124]: I0126 00:12:52.556302 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Jan 26 00:12:52 crc kubenswrapper[5124]: I0126 00:12:52.602330 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Jan 26 00:12:52 crc kubenswrapper[5124]: I0126 00:12:52.763723 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Jan 26 00:12:52 crc kubenswrapper[5124]: I0126 00:12:52.874846 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Jan 26 00:12:52 crc kubenswrapper[5124]: I0126 00:12:52.927161 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Jan 26 00:12:52 crc kubenswrapper[5124]: I0126 00:12:52.995423 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Jan 26 00:12:53 crc kubenswrapper[5124]: I0126 00:12:53.015671 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Jan 26 00:12:53 crc kubenswrapper[5124]: I0126 00:12:53.132240 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Jan 26 00:12:53 crc kubenswrapper[5124]: I0126 00:12:53.150098 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"pruner-dockercfg-rs58m\"" Jan 26 00:12:53 crc kubenswrapper[5124]: I0126 00:12:53.196560 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Jan 26 00:12:53 crc kubenswrapper[5124]: I0126 00:12:53.326384 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Jan 26 00:12:53 crc kubenswrapper[5124]: I0126 00:12:53.343447 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Jan 26 00:12:53 crc kubenswrapper[5124]: I0126 00:12:53.352705 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Jan 26 00:12:53 crc kubenswrapper[5124]: I0126 00:12:53.456226 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Jan 26 00:12:53 crc kubenswrapper[5124]: I0126 00:12:53.457461 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Jan 26 00:12:53 crc kubenswrapper[5124]: I0126 00:12:53.544256 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Jan 26 00:12:53 crc kubenswrapper[5124]: I0126 00:12:53.574298 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Jan 26 00:12:53 crc kubenswrapper[5124]: I0126 00:12:53.623510 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Jan 26 00:12:53 crc kubenswrapper[5124]: I0126 00:12:53.692978 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 26 00:12:53 crc kubenswrapper[5124]: I0126 00:12:53.794545 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Jan 26 00:12:53 crc kubenswrapper[5124]: I0126 00:12:53.814295 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Jan 26 00:12:53 crc kubenswrapper[5124]: I0126 00:12:53.893609 5124 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Jan 26 00:12:53 crc kubenswrapper[5124]: I0126 00:12:53.947840 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Jan 26 00:12:53 crc kubenswrapper[5124]: I0126 00:12:53.968469 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Jan 26 00:12:53 crc kubenswrapper[5124]: I0126 00:12:53.978169 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Jan 26 00:12:54 crc kubenswrapper[5124]: I0126 00:12:54.070194 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Jan 26 00:12:54 crc kubenswrapper[5124]: I0126 00:12:54.078686 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Jan 26 00:12:54 crc kubenswrapper[5124]: I0126 00:12:54.102575 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Jan 26 00:12:54 crc kubenswrapper[5124]: I0126 00:12:54.123769 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Jan 26 00:12:54 crc kubenswrapper[5124]: I0126 00:12:54.135082 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Jan 26 00:12:54 crc kubenswrapper[5124]: I0126 00:12:54.152361 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Jan 26 00:12:54 crc kubenswrapper[5124]: I0126 00:12:54.292900 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 26 00:12:54 crc kubenswrapper[5124]: I0126 00:12:54.308928 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Jan 26 00:12:54 crc kubenswrapper[5124]: I0126 00:12:54.321689 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Jan 26 00:12:54 crc kubenswrapper[5124]: I0126 00:12:54.388777 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Jan 26 00:12:54 crc kubenswrapper[5124]: I0126 00:12:54.409225 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Jan 26 00:12:54 crc kubenswrapper[5124]: I0126 00:12:54.710097 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Jan 26 00:12:54 crc kubenswrapper[5124]: I0126 00:12:54.812023 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Jan 26 00:12:54 crc kubenswrapper[5124]: I0126 00:12:54.869179 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:12:54 crc kubenswrapper[5124]: I0126 00:12:54.987991 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:12:55 crc kubenswrapper[5124]: I0126 00:12:55.082537 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Jan 26 00:12:55 crc kubenswrapper[5124]: I0126 00:12:55.152152 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Jan 26 00:12:55 crc kubenswrapper[5124]: I0126 00:12:55.164833 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Jan 26 00:12:55 crc kubenswrapper[5124]: I0126 00:12:55.181938 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Jan 26 00:12:55 crc kubenswrapper[5124]: I0126 00:12:55.228952 5124 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Jan 26 00:12:55 crc kubenswrapper[5124]: I0126 00:12:55.249661 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Jan 26 00:12:55 crc kubenswrapper[5124]: I0126 00:12:55.365281 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Jan 26 00:12:55 crc kubenswrapper[5124]: I0126 00:12:55.371969 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Jan 26 00:12:55 crc kubenswrapper[5124]: I0126 00:12:55.396974 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Jan 26 00:12:55 crc kubenswrapper[5124]: I0126 00:12:55.466335 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Jan 26 00:12:55 crc kubenswrapper[5124]: I0126 00:12:55.509835 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Jan 26 00:12:55 crc kubenswrapper[5124]: I0126 00:12:55.527055 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Jan 26 00:12:55 crc kubenswrapper[5124]: I0126 00:12:55.553742 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:12:55 crc kubenswrapper[5124]: I0126 00:12:55.568362 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Jan 26 00:12:55 crc kubenswrapper[5124]: I0126 00:12:55.618719 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Jan 26 00:12:55 crc kubenswrapper[5124]: I0126 00:12:55.638286 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Jan 26 00:12:55 crc kubenswrapper[5124]: I0126 
00:12:55.641873 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Jan 26 00:12:55 crc kubenswrapper[5124]: I0126 00:12:55.665313 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Jan 26 00:12:55 crc kubenswrapper[5124]: I0126 00:12:55.686353 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Jan 26 00:12:55 crc kubenswrapper[5124]: I0126 00:12:55.703441 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Jan 26 00:12:55 crc kubenswrapper[5124]: I0126 00:12:55.723732 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Jan 26 00:12:55 crc kubenswrapper[5124]: I0126 00:12:55.807435 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Jan 26 00:12:55 crc kubenswrapper[5124]: I0126 00:12:55.845476 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Jan 26 00:12:55 crc kubenswrapper[5124]: I0126 00:12:55.890371 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Jan 26 00:12:55 crc kubenswrapper[5124]: I0126 00:12:55.895886 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Jan 26 00:12:55 crc kubenswrapper[5124]: I0126 00:12:55.942537 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Jan 26 00:12:55 crc kubenswrapper[5124]: I0126 00:12:55.954833 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Jan 26 00:12:56 crc kubenswrapper[5124]: I0126 00:12:56.000496 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Jan 26 00:12:56 crc kubenswrapper[5124]: I0126 00:12:56.013330 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Jan 26 00:12:56 crc kubenswrapper[5124]: I0126 00:12:56.038817 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Jan 26 00:12:56 crc kubenswrapper[5124]: I0126 00:12:56.116604 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Jan 26 00:12:56 crc kubenswrapper[5124]: I0126 00:12:56.223008 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Jan 26 00:12:56 crc kubenswrapper[5124]: I0126 00:12:56.224651 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Jan 26 00:12:56 crc kubenswrapper[5124]: I0126 00:12:56.229087 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Jan 26 00:12:56 crc kubenswrapper[5124]: I0126 00:12:56.231220 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Jan 26 00:12:56 crc kubenswrapper[5124]: I0126 00:12:56.353171 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Jan 26 00:12:56 crc kubenswrapper[5124]: I0126 00:12:56.417095 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Jan 26 00:12:56 crc kubenswrapper[5124]: I0126 00:12:56.445305 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Jan 26 00:12:56 crc kubenswrapper[5124]: I0126 00:12:56.445571 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Jan 26 00:12:56 crc kubenswrapper[5124]: I0126 00:12:56.470513 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Jan 26 00:12:56 crc kubenswrapper[5124]: I0126 00:12:56.528673 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Jan 26 00:12:56 crc kubenswrapper[5124]: I0126 00:12:56.542607 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Jan 26 00:12:56 crc kubenswrapper[5124]: I0126 00:12:56.650371 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:12:56 crc kubenswrapper[5124]: I0126 00:12:56.699083 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Jan 26 00:12:56 crc kubenswrapper[5124]: I0126 00:12:56.756858 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 26 00:12:56 crc kubenswrapper[5124]: I0126 00:12:56.799470 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Jan 26 00:12:56 crc kubenswrapper[5124]: I0126 00:12:56.869575 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Jan 26 00:12:56 crc kubenswrapper[5124]: I0126 00:12:56.909920 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Jan 26 00:12:56 crc kubenswrapper[5124]: I0126 00:12:56.925641 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Jan 26 00:12:56 crc kubenswrapper[5124]: I0126 00:12:56.948008 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Jan 26 00:12:56 crc kubenswrapper[5124]: I0126 00:12:56.966556 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Jan 26 00:12:56 crc kubenswrapper[5124]: I0126 00:12:56.982245 5124 reflector.go:430] "Caches 
populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Jan 26 00:12:56 crc kubenswrapper[5124]: I0126 00:12:56.983221 5124 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 00:12:56 crc kubenswrapper[5124]: I0126 00:12:56.983495 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://b97eb685ffe79f227a9c02e63d305006d67b7ea602a8521b5b731a207679c2a8" gracePeriod=5 Jan 26 00:12:57 crc kubenswrapper[5124]: I0126 00:12:57.017293 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Jan 26 00:12:57 crc kubenswrapper[5124]: I0126 00:12:57.125660 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Jan 26 00:12:57 crc kubenswrapper[5124]: I0126 00:12:57.178566 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Jan 26 00:12:57 crc kubenswrapper[5124]: I0126 00:12:57.304270 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Jan 26 00:12:57 crc kubenswrapper[5124]: I0126 00:12:57.386555 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Jan 26 00:12:57 crc kubenswrapper[5124]: I0126 00:12:57.391373 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:12:57 crc kubenswrapper[5124]: I0126 00:12:57.483997 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Jan 26 00:12:57 crc kubenswrapper[5124]: I0126 00:12:57.512716 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Jan 26 00:12:57 crc kubenswrapper[5124]: I0126 00:12:57.587503 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Jan 26 00:12:57 crc kubenswrapper[5124]: I0126 00:12:57.738491 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Jan 26 00:12:57 crc kubenswrapper[5124]: I0126 00:12:57.745797 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:12:57 crc kubenswrapper[5124]: I0126 00:12:57.862200 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Jan 26 00:12:57 crc kubenswrapper[5124]: I0126 00:12:57.901337 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Jan 26 00:12:57 crc kubenswrapper[5124]: I0126 00:12:57.939431 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Jan 26 00:12:58 crc 
kubenswrapper[5124]: I0126 00:12:58.019753 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Jan 26 00:12:58 crc kubenswrapper[5124]: I0126 00:12:58.024831 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Jan 26 00:12:58 crc kubenswrapper[5124]: I0126 00:12:58.092864 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 26 00:12:58 crc kubenswrapper[5124]: I0126 00:12:58.176962 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Jan 26 00:12:58 crc kubenswrapper[5124]: I0126 00:12:58.213098 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Jan 26 00:12:58 crc kubenswrapper[5124]: I0126 00:12:58.220018 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Jan 26 00:12:58 crc kubenswrapper[5124]: I0126 00:12:58.290252 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Jan 26 00:12:58 crc kubenswrapper[5124]: I0126 00:12:58.571715 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 26 00:12:58 crc kubenswrapper[5124]: I0126 00:12:58.573216 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Jan 26 00:12:58 crc kubenswrapper[5124]: I0126 00:12:58.604921 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Jan 26 00:12:58 crc kubenswrapper[5124]: I0126 00:12:58.742542 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Jan 26 00:12:58 crc kubenswrapper[5124]: I0126 00:12:58.748374 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Jan 26 00:12:58 crc kubenswrapper[5124]: I0126 00:12:58.779604 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Jan 26 00:12:58 crc kubenswrapper[5124]: I0126 00:12:58.784720 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Jan 26 00:12:58 crc kubenswrapper[5124]: I0126 00:12:58.795247 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Jan 26 00:12:58 crc kubenswrapper[5124]: I0126 00:12:58.803111 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Jan 26 00:12:58 crc kubenswrapper[5124]: I0126 00:12:58.835150 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Jan 26 00:12:58 crc kubenswrapper[5124]: I0126 00:12:58.948499 5124 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Jan 26 00:12:59 crc kubenswrapper[5124]: I0126 00:12:59.032044 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Jan 26 00:12:59 crc kubenswrapper[5124]: I0126 00:12:59.181475 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Jan 26 00:12:59 crc kubenswrapper[5124]: I0126 00:12:59.210787 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 26 00:12:59 crc kubenswrapper[5124]: I0126 00:12:59.276960 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Jan 26 00:12:59 crc kubenswrapper[5124]: I0126 00:12:59.289544 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Jan 26 00:12:59 crc kubenswrapper[5124]: I0126 00:12:59.316175 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Jan 26 00:12:59 crc kubenswrapper[5124]: I0126 00:12:59.463118 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Jan 26 00:12:59 crc kubenswrapper[5124]: I0126 00:12:59.574313 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Jan 26 00:12:59 crc kubenswrapper[5124]: I0126 00:12:59.687528 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Jan 26 00:12:59 crc kubenswrapper[5124]: I0126 00:12:59.792549 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Jan 26 00:12:59 crc kubenswrapper[5124]: I0126 00:12:59.814486 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Jan 26 00:12:59 crc kubenswrapper[5124]: I0126 00:12:59.827181 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Jan 26 00:12:59 crc kubenswrapper[5124]: I0126 00:12:59.902660 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Jan 26 00:12:59 crc kubenswrapper[5124]: I0126 00:12:59.904713 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Jan 26 00:12:59 crc kubenswrapper[5124]: I0126 00:12:59.948314 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Jan 26 00:12:59 crc kubenswrapper[5124]: I0126 00:12:59.957173 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Jan 26 00:12:59 crc kubenswrapper[5124]: I0126 00:12:59.972510 5124 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Jan 26 00:13:00 crc kubenswrapper[5124]: I0126 00:13:00.066008 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Jan 26 00:13:00 crc kubenswrapper[5124]: I0126 00:13:00.082326 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:13:00 crc kubenswrapper[5124]: I0126 00:13:00.163955 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Jan 26 00:13:00 crc kubenswrapper[5124]: I0126 00:13:00.191519 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Jan 26 00:13:00 crc kubenswrapper[5124]: I0126 00:13:00.213004 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Jan 26 00:13:00 crc kubenswrapper[5124]: I0126 00:13:00.363664 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Jan 26 00:13:00 crc kubenswrapper[5124]: I0126 00:13:00.457186 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Jan 26 00:13:00 crc kubenswrapper[5124]: I0126 00:13:00.569735 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Jan 26 00:13:00 crc kubenswrapper[5124]: I0126 00:13:00.625342 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Jan 26 00:13:00 crc kubenswrapper[5124]: I0126 00:13:00.643136 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Jan 26 00:13:00 crc kubenswrapper[5124]: I0126 00:13:00.690397 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Jan 26 00:13:00 crc kubenswrapper[5124]: I0126 00:13:00.691562 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:13:00 crc kubenswrapper[5124]: I0126 00:13:00.695038 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Jan 26 00:13:00 crc kubenswrapper[5124]: I0126 00:13:00.729093 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Jan 26 00:13:01 crc kubenswrapper[5124]: I0126 00:13:01.287628 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Jan 26 00:13:01 crc kubenswrapper[5124]: I0126 00:13:01.334187 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Jan 26 00:13:01 crc kubenswrapper[5124]: I0126 00:13:01.338814 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Jan 26 00:13:01 crc kubenswrapper[5124]: I0126 00:13:01.391781 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Jan 26 00:13:01 crc kubenswrapper[5124]: I0126 00:13:01.443447 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Jan 26 00:13:01 crc kubenswrapper[5124]: I0126 00:13:01.533421 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Jan 26 00:13:01 crc kubenswrapper[5124]: I0126 00:13:01.654259 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Jan 26 00:13:01 crc kubenswrapper[5124]: I0126 00:13:01.655618 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Jan 26 00:13:01 crc kubenswrapper[5124]: I0126 00:13:01.819967 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Jan 26 00:13:01 crc kubenswrapper[5124]: I0126 00:13:01.842903 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Jan 26 00:13:02 crc kubenswrapper[5124]: I0126 00:13:02.032554 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Jan 26 00:13:02 crc kubenswrapper[5124]: I0126 00:13:02.219379 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Jan 26 00:13:02 crc kubenswrapper[5124]: I0126 00:13:02.300031 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Jan 26 00:13:02 crc kubenswrapper[5124]: I0126 00:13:02.420542 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Jan 26 00:13:02 crc kubenswrapper[5124]: I0126 00:13:02.554676 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Jan 26 00:13:02 crc kubenswrapper[5124]: I0126 00:13:02.554949 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:02 crc kubenswrapper[5124]: I0126 00:13:02.556826 5124 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Jan 26 00:13:02 crc kubenswrapper[5124]: I0126 00:13:02.623729 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Jan 26 00:13:02 crc kubenswrapper[5124]: I0126 00:13:02.623843 5124 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="b97eb685ffe79f227a9c02e63d305006d67b7ea602a8521b5b731a207679c2a8" exitCode=137 Jan 26 00:13:02 crc kubenswrapper[5124]: I0126 00:13:02.623961 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:02 crc kubenswrapper[5124]: I0126 00:13:02.624087 5124 scope.go:117] "RemoveContainer" containerID="b97eb685ffe79f227a9c02e63d305006d67b7ea602a8521b5b731a207679c2a8" Jan 26 00:13:02 crc kubenswrapper[5124]: I0126 00:13:02.643093 5124 scope.go:117] "RemoveContainer" containerID="b97eb685ffe79f227a9c02e63d305006d67b7ea602a8521b5b731a207679c2a8" Jan 26 00:13:02 crc kubenswrapper[5124]: E0126 00:13:02.644043 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b97eb685ffe79f227a9c02e63d305006d67b7ea602a8521b5b731a207679c2a8\": container with ID starting with b97eb685ffe79f227a9c02e63d305006d67b7ea602a8521b5b731a207679c2a8 not found: ID does not exist" containerID="b97eb685ffe79f227a9c02e63d305006d67b7ea602a8521b5b731a207679c2a8" Jan 26 00:13:02 crc kubenswrapper[5124]: I0126 00:13:02.644078 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b97eb685ffe79f227a9c02e63d305006d67b7ea602a8521b5b731a207679c2a8"} err="failed to get container status \"b97eb685ffe79f227a9c02e63d305006d67b7ea602a8521b5b731a207679c2a8\": rpc error: code = NotFound desc = could not find container \"b97eb685ffe79f227a9c02e63d305006d67b7ea602a8521b5b731a207679c2a8\": container with ID starting with b97eb685ffe79f227a9c02e63d305006d67b7ea602a8521b5b731a207679c2a8 not found: ID does not exist" Jan 26 00:13:02 crc kubenswrapper[5124]: I0126 00:13:02.648981 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 26 00:13:02 crc kubenswrapper[5124]: I0126 00:13:02.649172 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:13:02 crc kubenswrapper[5124]: I0126 00:13:02.649561 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 26 00:13:02 crc kubenswrapper[5124]: I0126 00:13:02.649704 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 26 00:13:02 crc kubenswrapper[5124]: I0126 00:13:02.649794 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:13:02 crc kubenswrapper[5124]: I0126 00:13:02.649967 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 26 00:13:02 crc kubenswrapper[5124]: I0126 00:13:02.650019 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:13:02 crc kubenswrapper[5124]: I0126 00:13:02.650115 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 26 00:13:02 crc kubenswrapper[5124]: I0126 00:13:02.650171 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:13:02 crc kubenswrapper[5124]: I0126 00:13:02.650392 5124 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:02 crc kubenswrapper[5124]: I0126 00:13:02.650469 5124 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:02 crc kubenswrapper[5124]: I0126 00:13:02.650529 5124 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:02 crc kubenswrapper[5124]: I0126 00:13:02.650612 5124 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:02 crc kubenswrapper[5124]: I0126 00:13:02.658769 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:13:02 crc kubenswrapper[5124]: I0126 00:13:02.752028 5124 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:02 crc kubenswrapper[5124]: I0126 00:13:02.951825 5124 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Jan 26 00:13:03 crc kubenswrapper[5124]: I0126 00:13:03.132257 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Jan 26 00:13:03 crc kubenswrapper[5124]: I0126 00:13:03.496306 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Jan 26 00:13:04 crc kubenswrapper[5124]: I0126 00:13:04.374427 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes" Jan 26 00:13:04 crc kubenswrapper[5124]: I0126 00:13:04.434721 5124 ???:1] "http: TLS handshake error from 192.168.126.11:57990: no serving certificate available for the kubelet" Jan 26 00:13:04 crc kubenswrapper[5124]: I0126 00:13:04.743079 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Jan 26 00:13:05 crc kubenswrapper[5124]: I0126 00:13:05.435125 5124 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Jan 26 00:13:10 crc kubenswrapper[5124]: I0126 00:13:10.830422 5124 patch_prober.go:28] interesting 
pod/machine-config-daemon-kmxcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:13:10 crc kubenswrapper[5124]: I0126 00:13:10.830819 5124 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" podUID="95fa0656-150a-4d93-a324-77a1306d91f7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:13:19 crc kubenswrapper[5124]: I0126 00:13:19.418737 5124 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-5hwt4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Jan 26 00:13:19 crc kubenswrapper[5124]: I0126 00:13:19.420550 5124 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-5hwt4" podUID="973d580d-7e62-419e-be96-115733ca98bf" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Jan 26 00:13:19 crc kubenswrapper[5124]: I0126 00:13:19.739568 5124 generic.go:358] "Generic (PLEG): container finished" podID="973d580d-7e62-419e-be96-115733ca98bf" containerID="8e9091f8fed28f88cf73c06f29899ff7362d84ec97673a79cb6fcebd3feb183a" exitCode=0 Jan 26 00:13:19 crc kubenswrapper[5124]: I0126 00:13:19.739682 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-5hwt4" event={"ID":"973d580d-7e62-419e-be96-115733ca98bf","Type":"ContainerDied","Data":"8e9091f8fed28f88cf73c06f29899ff7362d84ec97673a79cb6fcebd3feb183a"} Jan 26 00:13:19 crc kubenswrapper[5124]: I0126 00:13:19.740492 5124 scope.go:117] "RemoveContainer" containerID="8e9091f8fed28f88cf73c06f29899ff7362d84ec97673a79cb6fcebd3feb183a" Jan 26 00:13:20 crc kubenswrapper[5124]: I0126 00:13:20.748265 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-5hwt4" event={"ID":"973d580d-7e62-419e-be96-115733ca98bf","Type":"ContainerStarted","Data":"a544e3ba1690c2df00e3fec1ddda712f20e83b51cbd2413032be903a9db9297b"} Jan 26 00:13:20 crc kubenswrapper[5124]: I0126 00:13:20.749849 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-5hwt4" Jan 26 00:13:20 crc kubenswrapper[5124]: I0126 00:13:20.755504 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-5hwt4" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.127219 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-5cjkn"] Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.128459 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-5cjkn" podUID="26da0b98-2814-44cd-b28b-a1b2ef0ee88e" containerName="controller-manager" containerID="cri-o://fde88ae17d1ae04c73e8b87aff76ecc77c94e2d70b293268ec05fb2e36533635" gracePeriod=30 Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 
00:13:32.162079 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-f6l2j"] Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.162708 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-f6l2j" podUID="acdc983c-4d4e-4a1e-82a3-a137fe39882a" containerName="route-controller-manager" containerID="cri-o://4292b9719fce00119e65ff2e3454f405c72ed92cd22b001a947af79ad57847d8" gracePeriod=30 Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.495147 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-5cjkn" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.525612 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-f6l2j" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.545303 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-689cfd7b8c-zzdzz"] Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.546211 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a3d95296-9ae1-4722-9d5d-bdd64e912859" containerName="installer" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.546238 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3d95296-9ae1-4722-9d5d-bdd64e912859" containerName="installer" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.546265 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.546278 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.546313 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="26da0b98-2814-44cd-b28b-a1b2ef0ee88e" containerName="controller-manager" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.546321 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="26da0b98-2814-44cd-b28b-a1b2ef0ee88e" containerName="controller-manager" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.546342 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="acdc983c-4d4e-4a1e-82a3-a137fe39882a" containerName="route-controller-manager" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.546352 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="acdc983c-4d4e-4a1e-82a3-a137fe39882a" containerName="route-controller-manager" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.546476 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="a3d95296-9ae1-4722-9d5d-bdd64e912859" containerName="installer" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.546491 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.546502 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="26da0b98-2814-44cd-b28b-a1b2ef0ee88e" containerName="controller-manager" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.546511 5124 memory_manager.go:356] "RemoveStaleState 
removing state" podUID="acdc983c-4d4e-4a1e-82a3-a137fe39882a" containerName="route-controller-manager" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.555261 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-689cfd7b8c-zzdzz"] Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.555472 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-689cfd7b8c-zzdzz" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.563811 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c7df55c84-gnzq2"] Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.569408 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c7df55c84-gnzq2"] Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.569704 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c7df55c84-gnzq2" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.639755 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/26da0b98-2814-44cd-b28b-a1b2ef0ee88e-client-ca\") pod \"26da0b98-2814-44cd-b28b-a1b2ef0ee88e\" (UID: \"26da0b98-2814-44cd-b28b-a1b2ef0ee88e\") " Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.639823 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7cvw\" (UniqueName: \"kubernetes.io/projected/acdc983c-4d4e-4a1e-82a3-a137fe39882a-kube-api-access-z7cvw\") pod \"acdc983c-4d4e-4a1e-82a3-a137fe39882a\" (UID: \"acdc983c-4d4e-4a1e-82a3-a137fe39882a\") " Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.639862 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26da0b98-2814-44cd-b28b-a1b2ef0ee88e-config\") pod \"26da0b98-2814-44cd-b28b-a1b2ef0ee88e\" (UID: \"26da0b98-2814-44cd-b28b-a1b2ef0ee88e\") " Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.639882 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acdc983c-4d4e-4a1e-82a3-a137fe39882a-config\") pod \"acdc983c-4d4e-4a1e-82a3-a137fe39882a\" (UID: \"acdc983c-4d4e-4a1e-82a3-a137fe39882a\") " Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.639901 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/26da0b98-2814-44cd-b28b-a1b2ef0ee88e-proxy-ca-bundles\") pod \"26da0b98-2814-44cd-b28b-a1b2ef0ee88e\" (UID: \"26da0b98-2814-44cd-b28b-a1b2ef0ee88e\") " Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.639929 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/acdc983c-4d4e-4a1e-82a3-a137fe39882a-client-ca\") pod \"acdc983c-4d4e-4a1e-82a3-a137fe39882a\" (UID: \"acdc983c-4d4e-4a1e-82a3-a137fe39882a\") " Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.639993 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26da0b98-2814-44cd-b28b-a1b2ef0ee88e-serving-cert\") pod \"26da0b98-2814-44cd-b28b-a1b2ef0ee88e\" (UID: 
\"26da0b98-2814-44cd-b28b-a1b2ef0ee88e\") " Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.640046 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acdc983c-4d4e-4a1e-82a3-a137fe39882a-serving-cert\") pod \"acdc983c-4d4e-4a1e-82a3-a137fe39882a\" (UID: \"acdc983c-4d4e-4a1e-82a3-a137fe39882a\") " Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.640073 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/acdc983c-4d4e-4a1e-82a3-a137fe39882a-tmp\") pod \"acdc983c-4d4e-4a1e-82a3-a137fe39882a\" (UID: \"acdc983c-4d4e-4a1e-82a3-a137fe39882a\") " Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.640092 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmp97\" (UniqueName: \"kubernetes.io/projected/26da0b98-2814-44cd-b28b-a1b2ef0ee88e-kube-api-access-zmp97\") pod \"26da0b98-2814-44cd-b28b-a1b2ef0ee88e\" (UID: \"26da0b98-2814-44cd-b28b-a1b2ef0ee88e\") " Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.640124 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/26da0b98-2814-44cd-b28b-a1b2ef0ee88e-tmp\") pod \"26da0b98-2814-44cd-b28b-a1b2ef0ee88e\" (UID: \"26da0b98-2814-44cd-b28b-a1b2ef0ee88e\") " Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.640236 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/deb50899-ae29-4ecc-b17a-98d10491e5dd-client-ca\") pod \"controller-manager-689cfd7b8c-zzdzz\" (UID: \"deb50899-ae29-4ecc-b17a-98d10491e5dd\") " pod="openshift-controller-manager/controller-manager-689cfd7b8c-zzdzz" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.640295 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd-serving-cert\") pod \"route-controller-manager-6c7df55c84-gnzq2\" (UID: \"1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd\") " pod="openshift-route-controller-manager/route-controller-manager-6c7df55c84-gnzq2" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.640324 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/deb50899-ae29-4ecc-b17a-98d10491e5dd-serving-cert\") pod \"controller-manager-689cfd7b8c-zzdzz\" (UID: \"deb50899-ae29-4ecc-b17a-98d10491e5dd\") " pod="openshift-controller-manager/controller-manager-689cfd7b8c-zzdzz" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.640351 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m2cd\" (UniqueName: \"kubernetes.io/projected/deb50899-ae29-4ecc-b17a-98d10491e5dd-kube-api-access-6m2cd\") pod \"controller-manager-689cfd7b8c-zzdzz\" (UID: \"deb50899-ae29-4ecc-b17a-98d10491e5dd\") " pod="openshift-controller-manager/controller-manager-689cfd7b8c-zzdzz" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.640376 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd-client-ca\") pod \"route-controller-manager-6c7df55c84-gnzq2\" (UID: 
\"1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd\") " pod="openshift-route-controller-manager/route-controller-manager-6c7df55c84-gnzq2" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.640410 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/deb50899-ae29-4ecc-b17a-98d10491e5dd-proxy-ca-bundles\") pod \"controller-manager-689cfd7b8c-zzdzz\" (UID: \"deb50899-ae29-4ecc-b17a-98d10491e5dd\") " pod="openshift-controller-manager/controller-manager-689cfd7b8c-zzdzz" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.640434 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd-config\") pod \"route-controller-manager-6c7df55c84-gnzq2\" (UID: \"1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd\") " pod="openshift-route-controller-manager/route-controller-manager-6c7df55c84-gnzq2" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.640461 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd-tmp\") pod \"route-controller-manager-6c7df55c84-gnzq2\" (UID: \"1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd\") " pod="openshift-route-controller-manager/route-controller-manager-6c7df55c84-gnzq2" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.640488 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deb50899-ae29-4ecc-b17a-98d10491e5dd-config\") pod \"controller-manager-689cfd7b8c-zzdzz\" (UID: \"deb50899-ae29-4ecc-b17a-98d10491e5dd\") " pod="openshift-controller-manager/controller-manager-689cfd7b8c-zzdzz" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.640522 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/deb50899-ae29-4ecc-b17a-98d10491e5dd-tmp\") pod \"controller-manager-689cfd7b8c-zzdzz\" (UID: \"deb50899-ae29-4ecc-b17a-98d10491e5dd\") " pod="openshift-controller-manager/controller-manager-689cfd7b8c-zzdzz" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.640550 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vnqn\" (UniqueName: \"kubernetes.io/projected/1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd-kube-api-access-4vnqn\") pod \"route-controller-manager-6c7df55c84-gnzq2\" (UID: \"1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd\") " pod="openshift-route-controller-manager/route-controller-manager-6c7df55c84-gnzq2" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.640849 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26da0b98-2814-44cd-b28b-a1b2ef0ee88e-client-ca" (OuterVolumeSpecName: "client-ca") pod "26da0b98-2814-44cd-b28b-a1b2ef0ee88e" (UID: "26da0b98-2814-44cd-b28b-a1b2ef0ee88e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.640953 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26da0b98-2814-44cd-b28b-a1b2ef0ee88e-config" (OuterVolumeSpecName: "config") pod "26da0b98-2814-44cd-b28b-a1b2ef0ee88e" (UID: "26da0b98-2814-44cd-b28b-a1b2ef0ee88e"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.641230 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acdc983c-4d4e-4a1e-82a3-a137fe39882a-tmp" (OuterVolumeSpecName: "tmp") pod "acdc983c-4d4e-4a1e-82a3-a137fe39882a" (UID: "acdc983c-4d4e-4a1e-82a3-a137fe39882a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.641547 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acdc983c-4d4e-4a1e-82a3-a137fe39882a-client-ca" (OuterVolumeSpecName: "client-ca") pod "acdc983c-4d4e-4a1e-82a3-a137fe39882a" (UID: "acdc983c-4d4e-4a1e-82a3-a137fe39882a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.641565 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26da0b98-2814-44cd-b28b-a1b2ef0ee88e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "26da0b98-2814-44cd-b28b-a1b2ef0ee88e" (UID: "26da0b98-2814-44cd-b28b-a1b2ef0ee88e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.642176 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26da0b98-2814-44cd-b28b-a1b2ef0ee88e-tmp" (OuterVolumeSpecName: "tmp") pod "26da0b98-2814-44cd-b28b-a1b2ef0ee88e" (UID: "26da0b98-2814-44cd-b28b-a1b2ef0ee88e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.642318 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acdc983c-4d4e-4a1e-82a3-a137fe39882a-config" (OuterVolumeSpecName: "config") pod "acdc983c-4d4e-4a1e-82a3-a137fe39882a" (UID: "acdc983c-4d4e-4a1e-82a3-a137fe39882a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.649924 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26da0b98-2814-44cd-b28b-a1b2ef0ee88e-kube-api-access-zmp97" (OuterVolumeSpecName: "kube-api-access-zmp97") pod "26da0b98-2814-44cd-b28b-a1b2ef0ee88e" (UID: "26da0b98-2814-44cd-b28b-a1b2ef0ee88e"). InnerVolumeSpecName "kube-api-access-zmp97". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.649900 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26da0b98-2814-44cd-b28b-a1b2ef0ee88e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "26da0b98-2814-44cd-b28b-a1b2ef0ee88e" (UID: "26da0b98-2814-44cd-b28b-a1b2ef0ee88e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.649958 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acdc983c-4d4e-4a1e-82a3-a137fe39882a-kube-api-access-z7cvw" (OuterVolumeSpecName: "kube-api-access-z7cvw") pod "acdc983c-4d4e-4a1e-82a3-a137fe39882a" (UID: "acdc983c-4d4e-4a1e-82a3-a137fe39882a"). InnerVolumeSpecName "kube-api-access-z7cvw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.650276 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acdc983c-4d4e-4a1e-82a3-a137fe39882a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "acdc983c-4d4e-4a1e-82a3-a137fe39882a" (UID: "acdc983c-4d4e-4a1e-82a3-a137fe39882a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.741314 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd-serving-cert\") pod \"route-controller-manager-6c7df55c84-gnzq2\" (UID: \"1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd\") " pod="openshift-route-controller-manager/route-controller-manager-6c7df55c84-gnzq2" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.741392 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/deb50899-ae29-4ecc-b17a-98d10491e5dd-serving-cert\") pod \"controller-manager-689cfd7b8c-zzdzz\" (UID: \"deb50899-ae29-4ecc-b17a-98d10491e5dd\") " pod="openshift-controller-manager/controller-manager-689cfd7b8c-zzdzz" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.741421 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6m2cd\" (UniqueName: \"kubernetes.io/projected/deb50899-ae29-4ecc-b17a-98d10491e5dd-kube-api-access-6m2cd\") pod \"controller-manager-689cfd7b8c-zzdzz\" (UID: \"deb50899-ae29-4ecc-b17a-98d10491e5dd\") " pod="openshift-controller-manager/controller-manager-689cfd7b8c-zzdzz" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.741453 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd-client-ca\") pod \"route-controller-manager-6c7df55c84-gnzq2\" (UID: \"1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd\") " pod="openshift-route-controller-manager/route-controller-manager-6c7df55c84-gnzq2" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.741483 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/deb50899-ae29-4ecc-b17a-98d10491e5dd-proxy-ca-bundles\") pod \"controller-manager-689cfd7b8c-zzdzz\" (UID: \"deb50899-ae29-4ecc-b17a-98d10491e5dd\") " pod="openshift-controller-manager/controller-manager-689cfd7b8c-zzdzz" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.741511 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd-config\") pod \"route-controller-manager-6c7df55c84-gnzq2\" (UID: \"1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd\") " pod="openshift-route-controller-manager/route-controller-manager-6c7df55c84-gnzq2" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.741964 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd-tmp\") pod \"route-controller-manager-6c7df55c84-gnzq2\" (UID: \"1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd\") " pod="openshift-route-controller-manager/route-controller-manager-6c7df55c84-gnzq2" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.742055 5124 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deb50899-ae29-4ecc-b17a-98d10491e5dd-config\") pod \"controller-manager-689cfd7b8c-zzdzz\" (UID: \"deb50899-ae29-4ecc-b17a-98d10491e5dd\") " pod="openshift-controller-manager/controller-manager-689cfd7b8c-zzdzz" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.742129 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/deb50899-ae29-4ecc-b17a-98d10491e5dd-tmp\") pod \"controller-manager-689cfd7b8c-zzdzz\" (UID: \"deb50899-ae29-4ecc-b17a-98d10491e5dd\") " pod="openshift-controller-manager/controller-manager-689cfd7b8c-zzdzz" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.742189 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4vnqn\" (UniqueName: \"kubernetes.io/projected/1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd-kube-api-access-4vnqn\") pod \"route-controller-manager-6c7df55c84-gnzq2\" (UID: \"1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd\") " pod="openshift-route-controller-manager/route-controller-manager-6c7df55c84-gnzq2" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.742247 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/deb50899-ae29-4ecc-b17a-98d10491e5dd-client-ca\") pod \"controller-manager-689cfd7b8c-zzdzz\" (UID: \"deb50899-ae29-4ecc-b17a-98d10491e5dd\") " pod="openshift-controller-manager/controller-manager-689cfd7b8c-zzdzz" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.742351 5124 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acdc983c-4d4e-4a1e-82a3-a137fe39882a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.742373 5124 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/acdc983c-4d4e-4a1e-82a3-a137fe39882a-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.742385 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zmp97\" (UniqueName: \"kubernetes.io/projected/26da0b98-2814-44cd-b28b-a1b2ef0ee88e-kube-api-access-zmp97\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.742398 5124 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/26da0b98-2814-44cd-b28b-a1b2ef0ee88e-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.742411 5124 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/26da0b98-2814-44cd-b28b-a1b2ef0ee88e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.742422 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z7cvw\" (UniqueName: \"kubernetes.io/projected/acdc983c-4d4e-4a1e-82a3-a137fe39882a-kube-api-access-z7cvw\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.742437 5124 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26da0b98-2814-44cd-b28b-a1b2ef0ee88e-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.742448 5124 reconciler_common.go:299] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/acdc983c-4d4e-4a1e-82a3-a137fe39882a-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.742459 5124 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/26da0b98-2814-44cd-b28b-a1b2ef0ee88e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.742471 5124 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/acdc983c-4d4e-4a1e-82a3-a137fe39882a-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.742483 5124 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26da0b98-2814-44cd-b28b-a1b2ef0ee88e-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.743031 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd-tmp\") pod \"route-controller-manager-6c7df55c84-gnzq2\" (UID: \"1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd\") " pod="openshift-route-controller-manager/route-controller-manager-6c7df55c84-gnzq2" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.743610 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd-config\") pod \"route-controller-manager-6c7df55c84-gnzq2\" (UID: \"1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd\") " pod="openshift-route-controller-manager/route-controller-manager-6c7df55c84-gnzq2" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.743614 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/deb50899-ae29-4ecc-b17a-98d10491e5dd-client-ca\") pod \"controller-manager-689cfd7b8c-zzdzz\" (UID: \"deb50899-ae29-4ecc-b17a-98d10491e5dd\") " pod="openshift-controller-manager/controller-manager-689cfd7b8c-zzdzz" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.743993 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/deb50899-ae29-4ecc-b17a-98d10491e5dd-tmp\") pod \"controller-manager-689cfd7b8c-zzdzz\" (UID: \"deb50899-ae29-4ecc-b17a-98d10491e5dd\") " pod="openshift-controller-manager/controller-manager-689cfd7b8c-zzdzz" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.744240 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/deb50899-ae29-4ecc-b17a-98d10491e5dd-proxy-ca-bundles\") pod \"controller-manager-689cfd7b8c-zzdzz\" (UID: \"deb50899-ae29-4ecc-b17a-98d10491e5dd\") " pod="openshift-controller-manager/controller-manager-689cfd7b8c-zzdzz" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.745826 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deb50899-ae29-4ecc-b17a-98d10491e5dd-config\") pod \"controller-manager-689cfd7b8c-zzdzz\" (UID: \"deb50899-ae29-4ecc-b17a-98d10491e5dd\") " pod="openshift-controller-manager/controller-manager-689cfd7b8c-zzdzz" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.748191 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/deb50899-ae29-4ecc-b17a-98d10491e5dd-serving-cert\") pod \"controller-manager-689cfd7b8c-zzdzz\" (UID: \"deb50899-ae29-4ecc-b17a-98d10491e5dd\") " pod="openshift-controller-manager/controller-manager-689cfd7b8c-zzdzz" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.749761 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd-client-ca\") pod \"route-controller-manager-6c7df55c84-gnzq2\" (UID: \"1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd\") " pod="openshift-route-controller-manager/route-controller-manager-6c7df55c84-gnzq2" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.749852 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd-serving-cert\") pod \"route-controller-manager-6c7df55c84-gnzq2\" (UID: \"1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd\") " pod="openshift-route-controller-manager/route-controller-manager-6c7df55c84-gnzq2" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.763661 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m2cd\" (UniqueName: \"kubernetes.io/projected/deb50899-ae29-4ecc-b17a-98d10491e5dd-kube-api-access-6m2cd\") pod \"controller-manager-689cfd7b8c-zzdzz\" (UID: \"deb50899-ae29-4ecc-b17a-98d10491e5dd\") " pod="openshift-controller-manager/controller-manager-689cfd7b8c-zzdzz" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.764444 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vnqn\" (UniqueName: \"kubernetes.io/projected/1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd-kube-api-access-4vnqn\") pod \"route-controller-manager-6c7df55c84-gnzq2\" (UID: \"1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd\") " pod="openshift-route-controller-manager/route-controller-manager-6c7df55c84-gnzq2" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.835272 5124 generic.go:358] "Generic (PLEG): container finished" podID="acdc983c-4d4e-4a1e-82a3-a137fe39882a" containerID="4292b9719fce00119e65ff2e3454f405c72ed92cd22b001a947af79ad57847d8" exitCode=0 Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.835457 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-f6l2j" event={"ID":"acdc983c-4d4e-4a1e-82a3-a137fe39882a","Type":"ContainerDied","Data":"4292b9719fce00119e65ff2e3454f405c72ed92cd22b001a947af79ad57847d8"} Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.835530 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-f6l2j" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.835578 5124 scope.go:117] "RemoveContainer" containerID="4292b9719fce00119e65ff2e3454f405c72ed92cd22b001a947af79ad57847d8" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.835552 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-f6l2j" event={"ID":"acdc983c-4d4e-4a1e-82a3-a137fe39882a","Type":"ContainerDied","Data":"8eccfd027be1754d9e541a74d57b4ca5fcf299da03361198c8914b88298b9c3f"} Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.840823 5124 generic.go:358] "Generic (PLEG): container finished" podID="26da0b98-2814-44cd-b28b-a1b2ef0ee88e" containerID="fde88ae17d1ae04c73e8b87aff76ecc77c94e2d70b293268ec05fb2e36533635" exitCode=0 Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.841049 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-5cjkn" event={"ID":"26da0b98-2814-44cd-b28b-a1b2ef0ee88e","Type":"ContainerDied","Data":"fde88ae17d1ae04c73e8b87aff76ecc77c94e2d70b293268ec05fb2e36533635"} Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.841086 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-5cjkn" event={"ID":"26da0b98-2814-44cd-b28b-a1b2ef0ee88e","Type":"ContainerDied","Data":"d77be7a904260d259be8993948dd0a5a7a04c32d8b2eb50b69eb6adaf76758e7"} Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.841196 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-5cjkn" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.865451 5124 scope.go:117] "RemoveContainer" containerID="4292b9719fce00119e65ff2e3454f405c72ed92cd22b001a947af79ad57847d8" Jan 26 00:13:32 crc kubenswrapper[5124]: E0126 00:13:32.866166 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4292b9719fce00119e65ff2e3454f405c72ed92cd22b001a947af79ad57847d8\": container with ID starting with 4292b9719fce00119e65ff2e3454f405c72ed92cd22b001a947af79ad57847d8 not found: ID does not exist" containerID="4292b9719fce00119e65ff2e3454f405c72ed92cd22b001a947af79ad57847d8" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.866248 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4292b9719fce00119e65ff2e3454f405c72ed92cd22b001a947af79ad57847d8"} err="failed to get container status \"4292b9719fce00119e65ff2e3454f405c72ed92cd22b001a947af79ad57847d8\": rpc error: code = NotFound desc = could not find container \"4292b9719fce00119e65ff2e3454f405c72ed92cd22b001a947af79ad57847d8\": container with ID starting with 4292b9719fce00119e65ff2e3454f405c72ed92cd22b001a947af79ad57847d8 not found: ID does not exist" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.866301 5124 scope.go:117] "RemoveContainer" containerID="fde88ae17d1ae04c73e8b87aff76ecc77c94e2d70b293268ec05fb2e36533635" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.886013 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-689cfd7b8c-zzdzz" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.897503 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c7df55c84-gnzq2" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.898138 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-5cjkn"] Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.900577 5124 scope.go:117] "RemoveContainer" containerID="fde88ae17d1ae04c73e8b87aff76ecc77c94e2d70b293268ec05fb2e36533635" Jan 26 00:13:32 crc kubenswrapper[5124]: E0126 00:13:32.902726 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fde88ae17d1ae04c73e8b87aff76ecc77c94e2d70b293268ec05fb2e36533635\": container with ID starting with fde88ae17d1ae04c73e8b87aff76ecc77c94e2d70b293268ec05fb2e36533635 not found: ID does not exist" containerID="fde88ae17d1ae04c73e8b87aff76ecc77c94e2d70b293268ec05fb2e36533635" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.902799 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fde88ae17d1ae04c73e8b87aff76ecc77c94e2d70b293268ec05fb2e36533635"} err="failed to get container status \"fde88ae17d1ae04c73e8b87aff76ecc77c94e2d70b293268ec05fb2e36533635\": rpc error: code = NotFound desc = could not find container \"fde88ae17d1ae04c73e8b87aff76ecc77c94e2d70b293268ec05fb2e36533635\": container with ID starting with fde88ae17d1ae04c73e8b87aff76ecc77c94e2d70b293268ec05fb2e36533635 not found: ID does not exist" Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.907150 5124 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-5cjkn"] Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.915032 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-f6l2j"] Jan 26 00:13:32 crc kubenswrapper[5124]: I0126 00:13:32.921086 5124 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-f6l2j"] Jan 26 00:13:33 crc kubenswrapper[5124]: I0126 00:13:33.209105 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-689cfd7b8c-zzdzz"] Jan 26 00:13:33 crc kubenswrapper[5124]: I0126 00:13:33.244691 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c7df55c84-gnzq2"] Jan 26 00:13:33 crc kubenswrapper[5124]: W0126 00:13:33.252945 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d8fc130_cec0_4184_a7c4_b17ed8ebe0bd.slice/crio-a291730dfea24e0997fa385bea6030cbb794b3f9b705ecf5b97dc4e9fab359d6 WatchSource:0}: Error finding container a291730dfea24e0997fa385bea6030cbb794b3f9b705ecf5b97dc4e9fab359d6: Status 404 returned error can't find the container with id a291730dfea24e0997fa385bea6030cbb794b3f9b705ecf5b97dc4e9fab359d6 Jan 26 00:13:33 crc kubenswrapper[5124]: I0126 00:13:33.872559 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c7df55c84-gnzq2" event={"ID":"1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd","Type":"ContainerStarted","Data":"1278a034121855ef6bf01d94437c07a03ca984b1689ac5d2035b7821616be844"} Jan 26 00:13:33 crc kubenswrapper[5124]: I0126 00:13:33.873141 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6c7df55c84-gnzq2" event={"ID":"1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd","Type":"ContainerStarted","Data":"a291730dfea24e0997fa385bea6030cbb794b3f9b705ecf5b97dc4e9fab359d6"} Jan 26 00:13:33 crc kubenswrapper[5124]: I0126 00:13:33.873876 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-6c7df55c84-gnzq2" Jan 26 00:13:33 crc kubenswrapper[5124]: I0126 00:13:33.878233 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-689cfd7b8c-zzdzz" event={"ID":"deb50899-ae29-4ecc-b17a-98d10491e5dd","Type":"ContainerStarted","Data":"0a5364f8725b9ae2758a04a173f865fd4accc6fae576a49fb8921d6e9f2a4657"} Jan 26 00:13:33 crc kubenswrapper[5124]: I0126 00:13:33.878280 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-689cfd7b8c-zzdzz" event={"ID":"deb50899-ae29-4ecc-b17a-98d10491e5dd","Type":"ContainerStarted","Data":"07d9cf1ccbd27f1ce0f4336b2c7de9c664e9b3f4143ec9052b4630c7162eebb3"} Jan 26 00:13:33 crc kubenswrapper[5124]: I0126 00:13:33.879173 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-689cfd7b8c-zzdzz" Jan 26 00:13:33 crc kubenswrapper[5124]: I0126 00:13:33.902412 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6c7df55c84-gnzq2" podStartSLOduration=1.902375366 podStartE2EDuration="1.902375366s" podCreationTimestamp="2026-01-26 00:13:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:13:33.898010924 +0000 UTC m=+291.806930293" watchObservedRunningTime="2026-01-26 00:13:33.902375366 +0000 UTC m=+291.811294725" Jan 26 00:13:33 crc kubenswrapper[5124]: I0126 00:13:33.927867 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-689cfd7b8c-zzdzz" podStartSLOduration=1.9278402030000001 podStartE2EDuration="1.927840203s" podCreationTimestamp="2026-01-26 00:13:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:13:33.920883679 +0000 UTC m=+291.829803028" watchObservedRunningTime="2026-01-26 00:13:33.927840203 +0000 UTC m=+291.836759572" Jan 26 00:13:33 crc kubenswrapper[5124]: I0126 00:13:33.972788 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-689cfd7b8c-zzdzz" Jan 26 00:13:34 crc kubenswrapper[5124]: I0126 00:13:34.084618 5124 ???:1] "http: TLS handshake error from 192.168.126.11:59178: no serving certificate available for the kubelet" Jan 26 00:13:34 crc kubenswrapper[5124]: I0126 00:13:34.268483 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-689cfd7b8c-zzdzz"] Jan 26 00:13:34 crc kubenswrapper[5124]: I0126 00:13:34.279538 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c7df55c84-gnzq2"] Jan 26 00:13:34 crc kubenswrapper[5124]: I0126 00:13:34.372373 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26da0b98-2814-44cd-b28b-a1b2ef0ee88e" 
path="/var/lib/kubelet/pods/26da0b98-2814-44cd-b28b-a1b2ef0ee88e/volumes" Jan 26 00:13:34 crc kubenswrapper[5124]: I0126 00:13:34.372940 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acdc983c-4d4e-4a1e-82a3-a137fe39882a" path="/var/lib/kubelet/pods/acdc983c-4d4e-4a1e-82a3-a137fe39882a/volumes" Jan 26 00:13:34 crc kubenswrapper[5124]: I0126 00:13:34.417329 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6c7df55c84-gnzq2" Jan 26 00:13:35 crc kubenswrapper[5124]: I0126 00:13:35.892379 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6c7df55c84-gnzq2" podUID="1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd" containerName="route-controller-manager" containerID="cri-o://1278a034121855ef6bf01d94437c07a03ca984b1689ac5d2035b7821616be844" gracePeriod=30 Jan 26 00:13:35 crc kubenswrapper[5124]: I0126 00:13:35.892540 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-689cfd7b8c-zzdzz" podUID="deb50899-ae29-4ecc-b17a-98d10491e5dd" containerName="controller-manager" containerID="cri-o://0a5364f8725b9ae2758a04a173f865fd4accc6fae576a49fb8921d6e9f2a4657" gracePeriod=30 Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.301956 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c7df55c84-gnzq2" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.326743 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69d5d96944-7fsjr"] Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.327256 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd" containerName="route-controller-manager" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.327280 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd" containerName="route-controller-manager" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.327416 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd" containerName="route-controller-manager" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.337320 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69d5d96944-7fsjr" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.341951 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69d5d96944-7fsjr"] Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.356642 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-689cfd7b8c-zzdzz" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.382773 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5584f6c956-m6vx2"] Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.383709 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="deb50899-ae29-4ecc-b17a-98d10491e5dd" containerName="controller-manager" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.383832 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="deb50899-ae29-4ecc-b17a-98d10491e5dd" containerName="controller-manager" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.383971 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="deb50899-ae29-4ecc-b17a-98d10491e5dd" containerName="controller-manager" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.407483 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5584f6c956-m6vx2" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.410808 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd-client-ca\") pod \"1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd\" (UID: \"1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd\") " Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.410875 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd-config\") pod \"1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd\" (UID: \"1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd\") " Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.411337 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd-client-ca" (OuterVolumeSpecName: "client-ca") pod "1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd" (UID: "1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.411432 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd-config" (OuterVolumeSpecName: "config") pod "1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd" (UID: "1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.411510 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd-tmp\") pod \"1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd\" (UID: \"1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd\") " Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.411853 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd-tmp" (OuterVolumeSpecName: "tmp") pod "1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd" (UID: "1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.411921 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd-serving-cert\") pod \"1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd\" (UID: \"1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd\") " Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.412021 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vnqn\" (UniqueName: \"kubernetes.io/projected/1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd-kube-api-access-4vnqn\") pod \"1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd\" (UID: \"1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd\") " Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.413756 5124 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.413782 5124 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.413794 5124 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.416965 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5584f6c956-m6vx2"] Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.425089 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd-kube-api-access-4vnqn" (OuterVolumeSpecName: "kube-api-access-4vnqn") pod "1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd" (UID: "1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd"). InnerVolumeSpecName "kube-api-access-4vnqn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.425710 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd" (UID: "1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.514479 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/deb50899-ae29-4ecc-b17a-98d10491e5dd-proxy-ca-bundles\") pod \"deb50899-ae29-4ecc-b17a-98d10491e5dd\" (UID: \"deb50899-ae29-4ecc-b17a-98d10491e5dd\") " Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.514570 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/deb50899-ae29-4ecc-b17a-98d10491e5dd-tmp\") pod \"deb50899-ae29-4ecc-b17a-98d10491e5dd\" (UID: \"deb50899-ae29-4ecc-b17a-98d10491e5dd\") " Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.514704 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/deb50899-ae29-4ecc-b17a-98d10491e5dd-client-ca\") pod \"deb50899-ae29-4ecc-b17a-98d10491e5dd\" (UID: \"deb50899-ae29-4ecc-b17a-98d10491e5dd\") " Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.514729 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/deb50899-ae29-4ecc-b17a-98d10491e5dd-serving-cert\") pod \"deb50899-ae29-4ecc-b17a-98d10491e5dd\" (UID: \"deb50899-ae29-4ecc-b17a-98d10491e5dd\") " Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.514783 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deb50899-ae29-4ecc-b17a-98d10491e5dd-config\") pod \"deb50899-ae29-4ecc-b17a-98d10491e5dd\" (UID: \"deb50899-ae29-4ecc-b17a-98d10491e5dd\") " Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.514871 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6m2cd\" (UniqueName: \"kubernetes.io/projected/deb50899-ae29-4ecc-b17a-98d10491e5dd-kube-api-access-6m2cd\") pod \"deb50899-ae29-4ecc-b17a-98d10491e5dd\" (UID: \"deb50899-ae29-4ecc-b17a-98d10491e5dd\") " Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.514978 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b0bbd66-4946-4919-abca-d9db56de3882-client-ca\") pod \"controller-manager-5584f6c956-m6vx2\" (UID: \"0b0bbd66-4946-4919-abca-d9db56de3882\") " pod="openshift-controller-manager/controller-manager-5584f6c956-m6vx2" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.515113 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bc8371bc-44cb-4ed0-98b1-03a838cbe230-tmp\") pod \"route-controller-manager-69d5d96944-7fsjr\" (UID: \"bc8371bc-44cb-4ed0-98b1-03a838cbe230\") " pod="openshift-route-controller-manager/route-controller-manager-69d5d96944-7fsjr" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.515234 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7wlf\" (UniqueName: \"kubernetes.io/projected/0b0bbd66-4946-4919-abca-d9db56de3882-kube-api-access-f7wlf\") pod \"controller-manager-5584f6c956-m6vx2\" (UID: \"0b0bbd66-4946-4919-abca-d9db56de3882\") " pod="openshift-controller-manager/controller-manager-5584f6c956-m6vx2" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.515316 5124 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0b0bbd66-4946-4919-abca-d9db56de3882-proxy-ca-bundles\") pod \"controller-manager-5584f6c956-m6vx2\" (UID: \"0b0bbd66-4946-4919-abca-d9db56de3882\") " pod="openshift-controller-manager/controller-manager-5584f6c956-m6vx2" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.515375 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deb50899-ae29-4ecc-b17a-98d10491e5dd-client-ca" (OuterVolumeSpecName: "client-ca") pod "deb50899-ae29-4ecc-b17a-98d10491e5dd" (UID: "deb50899-ae29-4ecc-b17a-98d10491e5dd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.515400 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/deb50899-ae29-4ecc-b17a-98d10491e5dd-tmp" (OuterVolumeSpecName: "tmp") pod "deb50899-ae29-4ecc-b17a-98d10491e5dd" (UID: "deb50899-ae29-4ecc-b17a-98d10491e5dd"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.515453 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc8371bc-44cb-4ed0-98b1-03a838cbe230-config\") pod \"route-controller-manager-69d5d96944-7fsjr\" (UID: \"bc8371bc-44cb-4ed0-98b1-03a838cbe230\") " pod="openshift-route-controller-manager/route-controller-manager-69d5d96944-7fsjr" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.515565 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r452d\" (UniqueName: \"kubernetes.io/projected/bc8371bc-44cb-4ed0-98b1-03a838cbe230-kube-api-access-r452d\") pod \"route-controller-manager-69d5d96944-7fsjr\" (UID: \"bc8371bc-44cb-4ed0-98b1-03a838cbe230\") " pod="openshift-route-controller-manager/route-controller-manager-69d5d96944-7fsjr" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.515674 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b0bbd66-4946-4919-abca-d9db56de3882-tmp\") pod \"controller-manager-5584f6c956-m6vx2\" (UID: \"0b0bbd66-4946-4919-abca-d9db56de3882\") " pod="openshift-controller-manager/controller-manager-5584f6c956-m6vx2" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.515682 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deb50899-ae29-4ecc-b17a-98d10491e5dd-config" (OuterVolumeSpecName: "config") pod "deb50899-ae29-4ecc-b17a-98d10491e5dd" (UID: "deb50899-ae29-4ecc-b17a-98d10491e5dd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.515720 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deb50899-ae29-4ecc-b17a-98d10491e5dd-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "deb50899-ae29-4ecc-b17a-98d10491e5dd" (UID: "deb50899-ae29-4ecc-b17a-98d10491e5dd"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.515759 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc8371bc-44cb-4ed0-98b1-03a838cbe230-serving-cert\") pod \"route-controller-manager-69d5d96944-7fsjr\" (UID: \"bc8371bc-44cb-4ed0-98b1-03a838cbe230\") " pod="openshift-route-controller-manager/route-controller-manager-69d5d96944-7fsjr" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.516146 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b0bbd66-4946-4919-abca-d9db56de3882-serving-cert\") pod \"controller-manager-5584f6c956-m6vx2\" (UID: \"0b0bbd66-4946-4919-abca-d9db56de3882\") " pod="openshift-controller-manager/controller-manager-5584f6c956-m6vx2" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.516207 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc8371bc-44cb-4ed0-98b1-03a838cbe230-client-ca\") pod \"route-controller-manager-69d5d96944-7fsjr\" (UID: \"bc8371bc-44cb-4ed0-98b1-03a838cbe230\") " pod="openshift-route-controller-manager/route-controller-manager-69d5d96944-7fsjr" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.516347 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b0bbd66-4946-4919-abca-d9db56de3882-config\") pod \"controller-manager-5584f6c956-m6vx2\" (UID: \"0b0bbd66-4946-4919-abca-d9db56de3882\") " pod="openshift-controller-manager/controller-manager-5584f6c956-m6vx2" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.516546 5124 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.516574 5124 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/deb50899-ae29-4ecc-b17a-98d10491e5dd-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.516607 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4vnqn\" (UniqueName: \"kubernetes.io/projected/1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd-kube-api-access-4vnqn\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.516620 5124 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/deb50899-ae29-4ecc-b17a-98d10491e5dd-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.516631 5124 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/deb50899-ae29-4ecc-b17a-98d10491e5dd-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.516645 5124 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deb50899-ae29-4ecc-b17a-98d10491e5dd-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.519581 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/deb50899-ae29-4ecc-b17a-98d10491e5dd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "deb50899-ae29-4ecc-b17a-98d10491e5dd" (UID: "deb50899-ae29-4ecc-b17a-98d10491e5dd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.519824 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deb50899-ae29-4ecc-b17a-98d10491e5dd-kube-api-access-6m2cd" (OuterVolumeSpecName: "kube-api-access-6m2cd") pod "deb50899-ae29-4ecc-b17a-98d10491e5dd" (UID: "deb50899-ae29-4ecc-b17a-98d10491e5dd"). InnerVolumeSpecName "kube-api-access-6m2cd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.617987 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b0bbd66-4946-4919-abca-d9db56de3882-serving-cert\") pod \"controller-manager-5584f6c956-m6vx2\" (UID: \"0b0bbd66-4946-4919-abca-d9db56de3882\") " pod="openshift-controller-manager/controller-manager-5584f6c956-m6vx2" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.618065 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc8371bc-44cb-4ed0-98b1-03a838cbe230-client-ca\") pod \"route-controller-manager-69d5d96944-7fsjr\" (UID: \"bc8371bc-44cb-4ed0-98b1-03a838cbe230\") " pod="openshift-route-controller-manager/route-controller-manager-69d5d96944-7fsjr" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.618102 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b0bbd66-4946-4919-abca-d9db56de3882-config\") pod \"controller-manager-5584f6c956-m6vx2\" (UID: \"0b0bbd66-4946-4919-abca-d9db56de3882\") " pod="openshift-controller-manager/controller-manager-5584f6c956-m6vx2" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.618145 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b0bbd66-4946-4919-abca-d9db56de3882-client-ca\") pod \"controller-manager-5584f6c956-m6vx2\" (UID: \"0b0bbd66-4946-4919-abca-d9db56de3882\") " pod="openshift-controller-manager/controller-manager-5584f6c956-m6vx2" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.618172 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bc8371bc-44cb-4ed0-98b1-03a838cbe230-tmp\") pod \"route-controller-manager-69d5d96944-7fsjr\" (UID: \"bc8371bc-44cb-4ed0-98b1-03a838cbe230\") " pod="openshift-route-controller-manager/route-controller-manager-69d5d96944-7fsjr" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.618210 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f7wlf\" (UniqueName: \"kubernetes.io/projected/0b0bbd66-4946-4919-abca-d9db56de3882-kube-api-access-f7wlf\") pod \"controller-manager-5584f6c956-m6vx2\" (UID: \"0b0bbd66-4946-4919-abca-d9db56de3882\") " pod="openshift-controller-manager/controller-manager-5584f6c956-m6vx2" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.618240 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0b0bbd66-4946-4919-abca-d9db56de3882-proxy-ca-bundles\") pod 
\"controller-manager-5584f6c956-m6vx2\" (UID: \"0b0bbd66-4946-4919-abca-d9db56de3882\") " pod="openshift-controller-manager/controller-manager-5584f6c956-m6vx2" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.618272 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc8371bc-44cb-4ed0-98b1-03a838cbe230-config\") pod \"route-controller-manager-69d5d96944-7fsjr\" (UID: \"bc8371bc-44cb-4ed0-98b1-03a838cbe230\") " pod="openshift-route-controller-manager/route-controller-manager-69d5d96944-7fsjr" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.618525 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r452d\" (UniqueName: \"kubernetes.io/projected/bc8371bc-44cb-4ed0-98b1-03a838cbe230-kube-api-access-r452d\") pod \"route-controller-manager-69d5d96944-7fsjr\" (UID: \"bc8371bc-44cb-4ed0-98b1-03a838cbe230\") " pod="openshift-route-controller-manager/route-controller-manager-69d5d96944-7fsjr" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.618620 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b0bbd66-4946-4919-abca-d9db56de3882-tmp\") pod \"controller-manager-5584f6c956-m6vx2\" (UID: \"0b0bbd66-4946-4919-abca-d9db56de3882\") " pod="openshift-controller-manager/controller-manager-5584f6c956-m6vx2" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.618678 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc8371bc-44cb-4ed0-98b1-03a838cbe230-serving-cert\") pod \"route-controller-manager-69d5d96944-7fsjr\" (UID: \"bc8371bc-44cb-4ed0-98b1-03a838cbe230\") " pod="openshift-route-controller-manager/route-controller-manager-69d5d96944-7fsjr" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.618900 5124 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/deb50899-ae29-4ecc-b17a-98d10491e5dd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.618917 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6m2cd\" (UniqueName: \"kubernetes.io/projected/deb50899-ae29-4ecc-b17a-98d10491e5dd-kube-api-access-6m2cd\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.619168 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bc8371bc-44cb-4ed0-98b1-03a838cbe230-tmp\") pod \"route-controller-manager-69d5d96944-7fsjr\" (UID: \"bc8371bc-44cb-4ed0-98b1-03a838cbe230\") " pod="openshift-route-controller-manager/route-controller-manager-69d5d96944-7fsjr" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.619192 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc8371bc-44cb-4ed0-98b1-03a838cbe230-client-ca\") pod \"route-controller-manager-69d5d96944-7fsjr\" (UID: \"bc8371bc-44cb-4ed0-98b1-03a838cbe230\") " pod="openshift-route-controller-manager/route-controller-manager-69d5d96944-7fsjr" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.620279 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b0bbd66-4946-4919-abca-d9db56de3882-client-ca\") pod \"controller-manager-5584f6c956-m6vx2\" (UID: 
\"0b0bbd66-4946-4919-abca-d9db56de3882\") " pod="openshift-controller-manager/controller-manager-5584f6c956-m6vx2" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.621307 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0b0bbd66-4946-4919-abca-d9db56de3882-proxy-ca-bundles\") pod \"controller-manager-5584f6c956-m6vx2\" (UID: \"0b0bbd66-4946-4919-abca-d9db56de3882\") " pod="openshift-controller-manager/controller-manager-5584f6c956-m6vx2" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.622298 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b0bbd66-4946-4919-abca-d9db56de3882-config\") pod \"controller-manager-5584f6c956-m6vx2\" (UID: \"0b0bbd66-4946-4919-abca-d9db56de3882\") " pod="openshift-controller-manager/controller-manager-5584f6c956-m6vx2" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.623195 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b0bbd66-4946-4919-abca-d9db56de3882-tmp\") pod \"controller-manager-5584f6c956-m6vx2\" (UID: \"0b0bbd66-4946-4919-abca-d9db56de3882\") " pod="openshift-controller-manager/controller-manager-5584f6c956-m6vx2" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.623632 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc8371bc-44cb-4ed0-98b1-03a838cbe230-config\") pod \"route-controller-manager-69d5d96944-7fsjr\" (UID: \"bc8371bc-44cb-4ed0-98b1-03a838cbe230\") " pod="openshift-route-controller-manager/route-controller-manager-69d5d96944-7fsjr" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.624237 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc8371bc-44cb-4ed0-98b1-03a838cbe230-serving-cert\") pod \"route-controller-manager-69d5d96944-7fsjr\" (UID: \"bc8371bc-44cb-4ed0-98b1-03a838cbe230\") " pod="openshift-route-controller-manager/route-controller-manager-69d5d96944-7fsjr" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.624810 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b0bbd66-4946-4919-abca-d9db56de3882-serving-cert\") pod \"controller-manager-5584f6c956-m6vx2\" (UID: \"0b0bbd66-4946-4919-abca-d9db56de3882\") " pod="openshift-controller-manager/controller-manager-5584f6c956-m6vx2" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.641463 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r452d\" (UniqueName: \"kubernetes.io/projected/bc8371bc-44cb-4ed0-98b1-03a838cbe230-kube-api-access-r452d\") pod \"route-controller-manager-69d5d96944-7fsjr\" (UID: \"bc8371bc-44cb-4ed0-98b1-03a838cbe230\") " pod="openshift-route-controller-manager/route-controller-manager-69d5d96944-7fsjr" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.647060 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7wlf\" (UniqueName: \"kubernetes.io/projected/0b0bbd66-4946-4919-abca-d9db56de3882-kube-api-access-f7wlf\") pod \"controller-manager-5584f6c956-m6vx2\" (UID: \"0b0bbd66-4946-4919-abca-d9db56de3882\") " pod="openshift-controller-manager/controller-manager-5584f6c956-m6vx2" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.668140 5124 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69d5d96944-7fsjr" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.741610 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5584f6c956-m6vx2" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.903856 5124 generic.go:358] "Generic (PLEG): container finished" podID="deb50899-ae29-4ecc-b17a-98d10491e5dd" containerID="0a5364f8725b9ae2758a04a173f865fd4accc6fae576a49fb8921d6e9f2a4657" exitCode=0 Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.903908 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-689cfd7b8c-zzdzz" event={"ID":"deb50899-ae29-4ecc-b17a-98d10491e5dd","Type":"ContainerDied","Data":"0a5364f8725b9ae2758a04a173f865fd4accc6fae576a49fb8921d6e9f2a4657"} Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.903961 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-689cfd7b8c-zzdzz" event={"ID":"deb50899-ae29-4ecc-b17a-98d10491e5dd","Type":"ContainerDied","Data":"07d9cf1ccbd27f1ce0f4336b2c7de9c664e9b3f4143ec9052b4630c7162eebb3"} Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.903991 5124 scope.go:117] "RemoveContainer" containerID="0a5364f8725b9ae2758a04a173f865fd4accc6fae576a49fb8921d6e9f2a4657" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.903991 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-689cfd7b8c-zzdzz" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.906213 5124 generic.go:358] "Generic (PLEG): container finished" podID="1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd" containerID="1278a034121855ef6bf01d94437c07a03ca984b1689ac5d2035b7821616be844" exitCode=0 Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.906296 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c7df55c84-gnzq2" event={"ID":"1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd","Type":"ContainerDied","Data":"1278a034121855ef6bf01d94437c07a03ca984b1689ac5d2035b7821616be844"} Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.906319 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c7df55c84-gnzq2" event={"ID":"1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd","Type":"ContainerDied","Data":"a291730dfea24e0997fa385bea6030cbb794b3f9b705ecf5b97dc4e9fab359d6"} Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.906402 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c7df55c84-gnzq2" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.930736 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69d5d96944-7fsjr"] Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.934348 5124 scope.go:117] "RemoveContainer" containerID="0a5364f8725b9ae2758a04a173f865fd4accc6fae576a49fb8921d6e9f2a4657" Jan 26 00:13:36 crc kubenswrapper[5124]: E0126 00:13:36.934993 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a5364f8725b9ae2758a04a173f865fd4accc6fae576a49fb8921d6e9f2a4657\": container with ID starting with 0a5364f8725b9ae2758a04a173f865fd4accc6fae576a49fb8921d6e9f2a4657 not found: ID does not exist" containerID="0a5364f8725b9ae2758a04a173f865fd4accc6fae576a49fb8921d6e9f2a4657" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.935038 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a5364f8725b9ae2758a04a173f865fd4accc6fae576a49fb8921d6e9f2a4657"} err="failed to get container status \"0a5364f8725b9ae2758a04a173f865fd4accc6fae576a49fb8921d6e9f2a4657\": rpc error: code = NotFound desc = could not find container \"0a5364f8725b9ae2758a04a173f865fd4accc6fae576a49fb8921d6e9f2a4657\": container with ID starting with 0a5364f8725b9ae2758a04a173f865fd4accc6fae576a49fb8921d6e9f2a4657 not found: ID does not exist" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.935074 5124 scope.go:117] "RemoveContainer" containerID="1278a034121855ef6bf01d94437c07a03ca984b1689ac5d2035b7821616be844" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.952778 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c7df55c84-gnzq2"] Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.960014 5124 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c7df55c84-gnzq2"] Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.960329 5124 scope.go:117] "RemoveContainer" containerID="1278a034121855ef6bf01d94437c07a03ca984b1689ac5d2035b7821616be844" Jan 26 00:13:36 crc kubenswrapper[5124]: E0126 00:13:36.961033 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1278a034121855ef6bf01d94437c07a03ca984b1689ac5d2035b7821616be844\": container with ID starting with 1278a034121855ef6bf01d94437c07a03ca984b1689ac5d2035b7821616be844 not found: ID does not exist" containerID="1278a034121855ef6bf01d94437c07a03ca984b1689ac5d2035b7821616be844" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.961090 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1278a034121855ef6bf01d94437c07a03ca984b1689ac5d2035b7821616be844"} err="failed to get container status \"1278a034121855ef6bf01d94437c07a03ca984b1689ac5d2035b7821616be844\": rpc error: code = NotFound desc = could not find container \"1278a034121855ef6bf01d94437c07a03ca984b1689ac5d2035b7821616be844\": container with ID starting with 1278a034121855ef6bf01d94437c07a03ca984b1689ac5d2035b7821616be844 not found: ID does not exist" Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.969425 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-689cfd7b8c-zzdzz"] Jan 26 00:13:36 crc kubenswrapper[5124]: I0126 00:13:36.973023 5124 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-689cfd7b8c-zzdzz"] Jan 26 00:13:37 crc kubenswrapper[5124]: I0126 00:13:37.006250 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5584f6c956-m6vx2"] Jan 26 00:13:37 crc kubenswrapper[5124]: W0126 00:13:37.014029 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b0bbd66_4946_4919_abca_d9db56de3882.slice/crio-1a98c92d9463f8df0ff2b4b97f87804b7f529045f798636f12390dc403121664 WatchSource:0}: Error finding container 1a98c92d9463f8df0ff2b4b97f87804b7f529045f798636f12390dc403121664: Status 404 returned error can't find the container with id 1a98c92d9463f8df0ff2b4b97f87804b7f529045f798636f12390dc403121664 Jan 26 00:13:37 crc kubenswrapper[5124]: I0126 00:13:37.932310 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5584f6c956-m6vx2" event={"ID":"0b0bbd66-4946-4919-abca-d9db56de3882","Type":"ContainerStarted","Data":"16984df9898d9390b83e8b8622f33508c208a2d2dabf96446a3f2aa587f32cfe"} Jan 26 00:13:37 crc kubenswrapper[5124]: I0126 00:13:37.932795 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5584f6c956-m6vx2" event={"ID":"0b0bbd66-4946-4919-abca-d9db56de3882","Type":"ContainerStarted","Data":"1a98c92d9463f8df0ff2b4b97f87804b7f529045f798636f12390dc403121664"} Jan 26 00:13:37 crc kubenswrapper[5124]: I0126 00:13:37.933390 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-5584f6c956-m6vx2" Jan 26 00:13:37 crc kubenswrapper[5124]: I0126 00:13:37.934530 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69d5d96944-7fsjr" event={"ID":"bc8371bc-44cb-4ed0-98b1-03a838cbe230","Type":"ContainerStarted","Data":"0d4c7f8cb5c3b39febbd7444e76897db43094b7714c507730dfb58cd03e2ae63"} Jan 26 00:13:37 crc kubenswrapper[5124]: I0126 00:13:37.934657 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69d5d96944-7fsjr" event={"ID":"bc8371bc-44cb-4ed0-98b1-03a838cbe230","Type":"ContainerStarted","Data":"357c449e7d8a40de4239e314b7edb5ba11eb5b73aa827a436af524f0a1e5c5c2"} Jan 26 00:13:37 crc kubenswrapper[5124]: I0126 00:13:37.935474 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-69d5d96944-7fsjr" Jan 26 00:13:37 crc kubenswrapper[5124]: I0126 00:13:37.940775 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-69d5d96944-7fsjr" Jan 26 00:13:37 crc kubenswrapper[5124]: I0126 00:13:37.961028 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5584f6c956-m6vx2" podStartSLOduration=3.96100981 podStartE2EDuration="3.96100981s" podCreationTimestamp="2026-01-26 00:13:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:13:37.956968768 +0000 UTC m=+295.865888117" 
watchObservedRunningTime="2026-01-26 00:13:37.96100981 +0000 UTC m=+295.869929149" Jan 26 00:13:38 crc kubenswrapper[5124]: I0126 00:13:38.030621 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5584f6c956-m6vx2" Jan 26 00:13:38 crc kubenswrapper[5124]: I0126 00:13:38.051923 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-69d5d96944-7fsjr" podStartSLOduration=4.051890463 podStartE2EDuration="4.051890463s" podCreationTimestamp="2026-01-26 00:13:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:13:37.980393698 +0000 UTC m=+295.889313077" watchObservedRunningTime="2026-01-26 00:13:38.051890463 +0000 UTC m=+295.960809852" Jan 26 00:13:38 crc kubenswrapper[5124]: I0126 00:13:38.370871 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd" path="/var/lib/kubelet/pods/1d8fc130-cec0-4184-a7c4-b17ed8ebe0bd/volumes" Jan 26 00:13:38 crc kubenswrapper[5124]: I0126 00:13:38.371410 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deb50899-ae29-4ecc-b17a-98d10491e5dd" path="/var/lib/kubelet/pods/deb50899-ae29-4ecc-b17a-98d10491e5dd/volumes" Jan 26 00:13:40 crc kubenswrapper[5124]: I0126 00:13:40.831088 5124 patch_prober.go:28] interesting pod/machine-config-daemon-kmxcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:13:40 crc kubenswrapper[5124]: I0126 00:13:40.831817 5124 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" podUID="95fa0656-150a-4d93-a324-77a1306d91f7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:13:40 crc kubenswrapper[5124]: I0126 00:13:40.831883 5124 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" Jan 26 00:13:40 crc kubenswrapper[5124]: I0126 00:13:40.832636 5124 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d83d6e9dbee8896d25299332774ac25503be88561fd1040886735c806d9b1d94"} pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 00:13:40 crc kubenswrapper[5124]: I0126 00:13:40.832718 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" podUID="95fa0656-150a-4d93-a324-77a1306d91f7" containerName="machine-config-daemon" containerID="cri-o://d83d6e9dbee8896d25299332774ac25503be88561fd1040886735c806d9b1d94" gracePeriod=600 Jan 26 00:13:41 crc kubenswrapper[5124]: I0126 00:13:41.959997 5124 generic.go:358] "Generic (PLEG): container finished" podID="95fa0656-150a-4d93-a324-77a1306d91f7" containerID="d83d6e9dbee8896d25299332774ac25503be88561fd1040886735c806d9b1d94" exitCode=0 Jan 26 00:13:41 crc kubenswrapper[5124]: I0126 00:13:41.960064 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" event={"ID":"95fa0656-150a-4d93-a324-77a1306d91f7","Type":"ContainerDied","Data":"d83d6e9dbee8896d25299332774ac25503be88561fd1040886735c806d9b1d94"} Jan 26 00:13:41 crc kubenswrapper[5124]: I0126 00:13:41.960624 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" event={"ID":"95fa0656-150a-4d93-a324-77a1306d91f7","Type":"ContainerStarted","Data":"6d673794be664ea88f97aff7d6202b405eb46b2e426b73ffc27f0c5fba62377f"} Jan 26 00:13:42 crc kubenswrapper[5124]: I0126 00:13:42.521599 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 26 00:13:42 crc kubenswrapper[5124]: I0126 00:13:42.524198 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 26 00:13:47 crc kubenswrapper[5124]: I0126 00:13:46.999632 5124 generic.go:358] "Generic (PLEG): container finished" podID="036651d1-0c52-4454-8385-bf3f84e19378" containerID="4ec23919a4d2a52dfa0dbf421a59683bbafd03cfb39a7902caecdea880479745" exitCode=0 Jan 26 00:13:47 crc kubenswrapper[5124]: I0126 00:13:46.999730 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29489760-dm2tt" event={"ID":"036651d1-0c52-4454-8385-bf3f84e19378","Type":"ContainerDied","Data":"4ec23919a4d2a52dfa0dbf421a59683bbafd03cfb39a7902caecdea880479745"} Jan 26 00:13:48 crc kubenswrapper[5124]: I0126 00:13:48.398212 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29489760-dm2tt" Jan 26 00:13:48 crc kubenswrapper[5124]: I0126 00:13:48.509885 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trhq6\" (UniqueName: \"kubernetes.io/projected/036651d1-0c52-4454-8385-bf3f84e19378-kube-api-access-trhq6\") pod \"036651d1-0c52-4454-8385-bf3f84e19378\" (UID: \"036651d1-0c52-4454-8385-bf3f84e19378\") " Jan 26 00:13:48 crc kubenswrapper[5124]: I0126 00:13:48.511351 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/036651d1-0c52-4454-8385-bf3f84e19378-serviceca\") pod \"036651d1-0c52-4454-8385-bf3f84e19378\" (UID: \"036651d1-0c52-4454-8385-bf3f84e19378\") " Jan 26 00:13:48 crc kubenswrapper[5124]: I0126 00:13:48.511944 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/036651d1-0c52-4454-8385-bf3f84e19378-serviceca" (OuterVolumeSpecName: "serviceca") pod "036651d1-0c52-4454-8385-bf3f84e19378" (UID: "036651d1-0c52-4454-8385-bf3f84e19378"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:13:48 crc kubenswrapper[5124]: I0126 00:13:48.512497 5124 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/036651d1-0c52-4454-8385-bf3f84e19378-serviceca\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:48 crc kubenswrapper[5124]: I0126 00:13:48.515954 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/036651d1-0c52-4454-8385-bf3f84e19378-kube-api-access-trhq6" (OuterVolumeSpecName: "kube-api-access-trhq6") pod "036651d1-0c52-4454-8385-bf3f84e19378" (UID: "036651d1-0c52-4454-8385-bf3f84e19378"). InnerVolumeSpecName "kube-api-access-trhq6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:13:48 crc kubenswrapper[5124]: I0126 00:13:48.613523 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-trhq6\" (UniqueName: \"kubernetes.io/projected/036651d1-0c52-4454-8385-bf3f84e19378-kube-api-access-trhq6\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:49 crc kubenswrapper[5124]: I0126 00:13:49.013117 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29489760-dm2tt" Jan 26 00:13:49 crc kubenswrapper[5124]: I0126 00:13:49.013131 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29489760-dm2tt" event={"ID":"036651d1-0c52-4454-8385-bf3f84e19378","Type":"ContainerDied","Data":"66d5ec531b195f62c6975c8f1db431517f68279cedce5911ec20021b748018b0"} Jan 26 00:13:49 crc kubenswrapper[5124]: I0126 00:13:49.013507 5124 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66d5ec531b195f62c6975c8f1db431517f68279cedce5911ec20021b748018b0" Jan 26 00:14:12 crc kubenswrapper[5124]: I0126 00:14:12.118038 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69d5d96944-7fsjr"] Jan 26 00:14:12 crc kubenswrapper[5124]: I0126 00:14:12.119178 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-69d5d96944-7fsjr" podUID="bc8371bc-44cb-4ed0-98b1-03a838cbe230" containerName="route-controller-manager" containerID="cri-o://0d4c7f8cb5c3b39febbd7444e76897db43094b7714c507730dfb58cd03e2ae63" gracePeriod=30 Jan 26 00:14:12 crc kubenswrapper[5124]: I0126 00:14:12.614094 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69d5d96944-7fsjr" Jan 26 00:14:12 crc kubenswrapper[5124]: I0126 00:14:12.646159 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75699db95-sr859"] Jan 26 00:14:12 crc kubenswrapper[5124]: I0126 00:14:12.647645 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bc8371bc-44cb-4ed0-98b1-03a838cbe230" containerName="route-controller-manager" Jan 26 00:14:12 crc kubenswrapper[5124]: I0126 00:14:12.647674 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc8371bc-44cb-4ed0-98b1-03a838cbe230" containerName="route-controller-manager" Jan 26 00:14:12 crc kubenswrapper[5124]: I0126 00:14:12.647729 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="036651d1-0c52-4454-8385-bf3f84e19378" containerName="image-pruner" Jan 26 00:14:12 crc kubenswrapper[5124]: I0126 00:14:12.647740 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="036651d1-0c52-4454-8385-bf3f84e19378" containerName="image-pruner" Jan 26 00:14:12 crc kubenswrapper[5124]: I0126 00:14:12.647898 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="bc8371bc-44cb-4ed0-98b1-03a838cbe230" containerName="route-controller-manager" Jan 26 00:14:12 crc kubenswrapper[5124]: I0126 00:14:12.647936 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="036651d1-0c52-4454-8385-bf3f84e19378" containerName="image-pruner" Jan 26 00:14:12 crc kubenswrapper[5124]: I0126 00:14:12.654953 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75699db95-sr859"] Jan 26 00:14:12 crc kubenswrapper[5124]: I0126 00:14:12.655125 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75699db95-sr859" Jan 26 00:14:12 crc kubenswrapper[5124]: I0126 00:14:12.769771 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bc8371bc-44cb-4ed0-98b1-03a838cbe230-tmp\") pod \"bc8371bc-44cb-4ed0-98b1-03a838cbe230\" (UID: \"bc8371bc-44cb-4ed0-98b1-03a838cbe230\") " Jan 26 00:14:12 crc kubenswrapper[5124]: I0126 00:14:12.770148 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc8371bc-44cb-4ed0-98b1-03a838cbe230-config\") pod \"bc8371bc-44cb-4ed0-98b1-03a838cbe230\" (UID: \"bc8371bc-44cb-4ed0-98b1-03a838cbe230\") " Jan 26 00:14:12 crc kubenswrapper[5124]: I0126 00:14:12.770179 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc8371bc-44cb-4ed0-98b1-03a838cbe230-client-ca\") pod \"bc8371bc-44cb-4ed0-98b1-03a838cbe230\" (UID: \"bc8371bc-44cb-4ed0-98b1-03a838cbe230\") " Jan 26 00:14:12 crc kubenswrapper[5124]: I0126 00:14:12.770217 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc8371bc-44cb-4ed0-98b1-03a838cbe230-serving-cert\") pod \"bc8371bc-44cb-4ed0-98b1-03a838cbe230\" (UID: \"bc8371bc-44cb-4ed0-98b1-03a838cbe230\") " Jan 26 00:14:12 crc kubenswrapper[5124]: I0126 00:14:12.770237 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r452d\" (UniqueName: \"kubernetes.io/projected/bc8371bc-44cb-4ed0-98b1-03a838cbe230-kube-api-access-r452d\") pod \"bc8371bc-44cb-4ed0-98b1-03a838cbe230\" (UID: \"bc8371bc-44cb-4ed0-98b1-03a838cbe230\") " Jan 26 00:14:12 crc kubenswrapper[5124]: I0126 00:14:12.770315 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a266906e-5890-42f5-a420-5f9476252c9d-tmp\") pod \"route-controller-manager-75699db95-sr859\" (UID: \"a266906e-5890-42f5-a420-5f9476252c9d\") " pod="openshift-route-controller-manager/route-controller-manager-75699db95-sr859" Jan 26 00:14:12 crc kubenswrapper[5124]: I0126 00:14:12.770345 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a266906e-5890-42f5-a420-5f9476252c9d-client-ca\") pod \"route-controller-manager-75699db95-sr859\" (UID: \"a266906e-5890-42f5-a420-5f9476252c9d\") " pod="openshift-route-controller-manager/route-controller-manager-75699db95-sr859" Jan 26 00:14:12 crc kubenswrapper[5124]: I0126 00:14:12.770367 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a266906e-5890-42f5-a420-5f9476252c9d-serving-cert\") pod \"route-controller-manager-75699db95-sr859\" (UID: \"a266906e-5890-42f5-a420-5f9476252c9d\") " pod="openshift-route-controller-manager/route-controller-manager-75699db95-sr859" Jan 26 00:14:12 crc kubenswrapper[5124]: I0126 00:14:12.770417 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czf6l\" (UniqueName: \"kubernetes.io/projected/a266906e-5890-42f5-a420-5f9476252c9d-kube-api-access-czf6l\") pod \"route-controller-manager-75699db95-sr859\" (UID: \"a266906e-5890-42f5-a420-5f9476252c9d\") 
" pod="openshift-route-controller-manager/route-controller-manager-75699db95-sr859" Jan 26 00:14:12 crc kubenswrapper[5124]: I0126 00:14:12.770453 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a266906e-5890-42f5-a420-5f9476252c9d-config\") pod \"route-controller-manager-75699db95-sr859\" (UID: \"a266906e-5890-42f5-a420-5f9476252c9d\") " pod="openshift-route-controller-manager/route-controller-manager-75699db95-sr859" Jan 26 00:14:12 crc kubenswrapper[5124]: I0126 00:14:12.771177 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc8371bc-44cb-4ed0-98b1-03a838cbe230-config" (OuterVolumeSpecName: "config") pod "bc8371bc-44cb-4ed0-98b1-03a838cbe230" (UID: "bc8371bc-44cb-4ed0-98b1-03a838cbe230"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:14:12 crc kubenswrapper[5124]: I0126 00:14:12.771255 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc8371bc-44cb-4ed0-98b1-03a838cbe230-client-ca" (OuterVolumeSpecName: "client-ca") pod "bc8371bc-44cb-4ed0-98b1-03a838cbe230" (UID: "bc8371bc-44cb-4ed0-98b1-03a838cbe230"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:14:12 crc kubenswrapper[5124]: I0126 00:14:12.771447 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc8371bc-44cb-4ed0-98b1-03a838cbe230-tmp" (OuterVolumeSpecName: "tmp") pod "bc8371bc-44cb-4ed0-98b1-03a838cbe230" (UID: "bc8371bc-44cb-4ed0-98b1-03a838cbe230"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:14:12 crc kubenswrapper[5124]: I0126 00:14:12.775932 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc8371bc-44cb-4ed0-98b1-03a838cbe230-kube-api-access-r452d" (OuterVolumeSpecName: "kube-api-access-r452d") pod "bc8371bc-44cb-4ed0-98b1-03a838cbe230" (UID: "bc8371bc-44cb-4ed0-98b1-03a838cbe230"). InnerVolumeSpecName "kube-api-access-r452d". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:14:12 crc kubenswrapper[5124]: I0126 00:14:12.776515 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc8371bc-44cb-4ed0-98b1-03a838cbe230-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc8371bc-44cb-4ed0-98b1-03a838cbe230" (UID: "bc8371bc-44cb-4ed0-98b1-03a838cbe230"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:14:12 crc kubenswrapper[5124]: I0126 00:14:12.871568 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-czf6l\" (UniqueName: \"kubernetes.io/projected/a266906e-5890-42f5-a420-5f9476252c9d-kube-api-access-czf6l\") pod \"route-controller-manager-75699db95-sr859\" (UID: \"a266906e-5890-42f5-a420-5f9476252c9d\") " pod="openshift-route-controller-manager/route-controller-manager-75699db95-sr859" Jan 26 00:14:12 crc kubenswrapper[5124]: I0126 00:14:12.871722 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a266906e-5890-42f5-a420-5f9476252c9d-config\") pod \"route-controller-manager-75699db95-sr859\" (UID: \"a266906e-5890-42f5-a420-5f9476252c9d\") " pod="openshift-route-controller-manager/route-controller-manager-75699db95-sr859" Jan 26 00:14:12 crc kubenswrapper[5124]: I0126 00:14:12.871818 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a266906e-5890-42f5-a420-5f9476252c9d-tmp\") pod \"route-controller-manager-75699db95-sr859\" (UID: \"a266906e-5890-42f5-a420-5f9476252c9d\") " pod="openshift-route-controller-manager/route-controller-manager-75699db95-sr859" Jan 26 00:14:12 crc kubenswrapper[5124]: I0126 00:14:12.871854 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a266906e-5890-42f5-a420-5f9476252c9d-client-ca\") pod \"route-controller-manager-75699db95-sr859\" (UID: \"a266906e-5890-42f5-a420-5f9476252c9d\") " pod="openshift-route-controller-manager/route-controller-manager-75699db95-sr859" Jan 26 00:14:12 crc kubenswrapper[5124]: I0126 00:14:12.871892 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a266906e-5890-42f5-a420-5f9476252c9d-serving-cert\") pod \"route-controller-manager-75699db95-sr859\" (UID: \"a266906e-5890-42f5-a420-5f9476252c9d\") " pod="openshift-route-controller-manager/route-controller-manager-75699db95-sr859" Jan 26 00:14:12 crc kubenswrapper[5124]: I0126 00:14:12.871948 5124 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bc8371bc-44cb-4ed0-98b1-03a838cbe230-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:14:12 crc kubenswrapper[5124]: I0126 00:14:12.871966 5124 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc8371bc-44cb-4ed0-98b1-03a838cbe230-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:14:12 crc kubenswrapper[5124]: I0126 00:14:12.871981 5124 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc8371bc-44cb-4ed0-98b1-03a838cbe230-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:14:12 crc kubenswrapper[5124]: I0126 00:14:12.871998 5124 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc8371bc-44cb-4ed0-98b1-03a838cbe230-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:14:12 crc kubenswrapper[5124]: I0126 00:14:12.872014 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r452d\" (UniqueName: \"kubernetes.io/projected/bc8371bc-44cb-4ed0-98b1-03a838cbe230-kube-api-access-r452d\") on node \"crc\" DevicePath \"\"" Jan 26 00:14:12 crc kubenswrapper[5124]: I0126 
00:14:12.872421 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a266906e-5890-42f5-a420-5f9476252c9d-tmp\") pod \"route-controller-manager-75699db95-sr859\" (UID: \"a266906e-5890-42f5-a420-5f9476252c9d\") " pod="openshift-route-controller-manager/route-controller-manager-75699db95-sr859" Jan 26 00:14:12 crc kubenswrapper[5124]: I0126 00:14:12.873280 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a266906e-5890-42f5-a420-5f9476252c9d-config\") pod \"route-controller-manager-75699db95-sr859\" (UID: \"a266906e-5890-42f5-a420-5f9476252c9d\") " pod="openshift-route-controller-manager/route-controller-manager-75699db95-sr859" Jan 26 00:14:12 crc kubenswrapper[5124]: I0126 00:14:12.873363 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a266906e-5890-42f5-a420-5f9476252c9d-client-ca\") pod \"route-controller-manager-75699db95-sr859\" (UID: \"a266906e-5890-42f5-a420-5f9476252c9d\") " pod="openshift-route-controller-manager/route-controller-manager-75699db95-sr859" Jan 26 00:14:12 crc kubenswrapper[5124]: I0126 00:14:12.876915 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a266906e-5890-42f5-a420-5f9476252c9d-serving-cert\") pod \"route-controller-manager-75699db95-sr859\" (UID: \"a266906e-5890-42f5-a420-5f9476252c9d\") " pod="openshift-route-controller-manager/route-controller-manager-75699db95-sr859" Jan 26 00:14:12 crc kubenswrapper[5124]: I0126 00:14:12.887023 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-czf6l\" (UniqueName: \"kubernetes.io/projected/a266906e-5890-42f5-a420-5f9476252c9d-kube-api-access-czf6l\") pod \"route-controller-manager-75699db95-sr859\" (UID: \"a266906e-5890-42f5-a420-5f9476252c9d\") " pod="openshift-route-controller-manager/route-controller-manager-75699db95-sr859" Jan 26 00:14:12 crc kubenswrapper[5124]: I0126 00:14:12.972852 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75699db95-sr859" Jan 26 00:14:13 crc kubenswrapper[5124]: I0126 00:14:13.156770 5124 generic.go:358] "Generic (PLEG): container finished" podID="bc8371bc-44cb-4ed0-98b1-03a838cbe230" containerID="0d4c7f8cb5c3b39febbd7444e76897db43094b7714c507730dfb58cd03e2ae63" exitCode=0 Jan 26 00:14:13 crc kubenswrapper[5124]: I0126 00:14:13.156854 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69d5d96944-7fsjr" Jan 26 00:14:13 crc kubenswrapper[5124]: I0126 00:14:13.156880 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69d5d96944-7fsjr" event={"ID":"bc8371bc-44cb-4ed0-98b1-03a838cbe230","Type":"ContainerDied","Data":"0d4c7f8cb5c3b39febbd7444e76897db43094b7714c507730dfb58cd03e2ae63"} Jan 26 00:14:13 crc kubenswrapper[5124]: I0126 00:14:13.157340 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69d5d96944-7fsjr" event={"ID":"bc8371bc-44cb-4ed0-98b1-03a838cbe230","Type":"ContainerDied","Data":"357c449e7d8a40de4239e314b7edb5ba11eb5b73aa827a436af524f0a1e5c5c2"} Jan 26 00:14:13 crc kubenswrapper[5124]: I0126 00:14:13.157476 5124 scope.go:117] "RemoveContainer" containerID="0d4c7f8cb5c3b39febbd7444e76897db43094b7714c507730dfb58cd03e2ae63" Jan 26 00:14:13 crc kubenswrapper[5124]: I0126 00:14:13.188833 5124 scope.go:117] "RemoveContainer" containerID="0d4c7f8cb5c3b39febbd7444e76897db43094b7714c507730dfb58cd03e2ae63" Jan 26 00:14:13 crc kubenswrapper[5124]: E0126 00:14:13.190288 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d4c7f8cb5c3b39febbd7444e76897db43094b7714c507730dfb58cd03e2ae63\": container with ID starting with 0d4c7f8cb5c3b39febbd7444e76897db43094b7714c507730dfb58cd03e2ae63 not found: ID does not exist" containerID="0d4c7f8cb5c3b39febbd7444e76897db43094b7714c507730dfb58cd03e2ae63" Jan 26 00:14:13 crc kubenswrapper[5124]: I0126 00:14:13.190352 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d4c7f8cb5c3b39febbd7444e76897db43094b7714c507730dfb58cd03e2ae63"} err="failed to get container status \"0d4c7f8cb5c3b39febbd7444e76897db43094b7714c507730dfb58cd03e2ae63\": rpc error: code = NotFound desc = could not find container \"0d4c7f8cb5c3b39febbd7444e76897db43094b7714c507730dfb58cd03e2ae63\": container with ID starting with 0d4c7f8cb5c3b39febbd7444e76897db43094b7714c507730dfb58cd03e2ae63 not found: ID does not exist" Jan 26 00:14:13 crc kubenswrapper[5124]: I0126 00:14:13.195921 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69d5d96944-7fsjr"] Jan 26 00:14:13 crc kubenswrapper[5124]: I0126 00:14:13.202420 5124 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69d5d96944-7fsjr"] Jan 26 00:14:13 crc kubenswrapper[5124]: I0126 00:14:13.447867 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75699db95-sr859"] Jan 26 00:14:13 crc kubenswrapper[5124]: I0126 00:14:13.461398 5124 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 00:14:14 crc kubenswrapper[5124]: I0126 00:14:14.166804 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75699db95-sr859" event={"ID":"a266906e-5890-42f5-a420-5f9476252c9d","Type":"ContainerStarted","Data":"bd1bd5226fa4a2ad67d969bcc0f9422e31db50cc22818e9c9502e625ad9ad70b"} Jan 26 00:14:14 crc kubenswrapper[5124]: I0126 00:14:14.167199 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-route-controller-manager/route-controller-manager-75699db95-sr859" Jan 26 00:14:14 crc kubenswrapper[5124]: I0126 00:14:14.167212 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75699db95-sr859" event={"ID":"a266906e-5890-42f5-a420-5f9476252c9d","Type":"ContainerStarted","Data":"c763bc0604ed0f876ded4e34aab13e80a4644ff60fc6787bc2272cea2619c270"} Jan 26 00:14:14 crc kubenswrapper[5124]: I0126 00:14:14.181495 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-75699db95-sr859" Jan 26 00:14:14 crc kubenswrapper[5124]: I0126 00:14:14.204392 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-75699db95-sr859" podStartSLOduration=2.204377165 podStartE2EDuration="2.204377165s" podCreationTimestamp="2026-01-26 00:14:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:14:14.197493036 +0000 UTC m=+332.106412445" watchObservedRunningTime="2026-01-26 00:14:14.204377165 +0000 UTC m=+332.113296514" Jan 26 00:14:14 crc kubenswrapper[5124]: I0126 00:14:14.372372 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc8371bc-44cb-4ed0-98b1-03a838cbe230" path="/var/lib/kubelet/pods/bc8371bc-44cb-4ed0-98b1-03a838cbe230/volumes" Jan 26 00:14:18 crc kubenswrapper[5124]: I0126 00:14:18.806935 5124 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 26 00:14:36 crc kubenswrapper[5124]: I0126 00:14:36.882349 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jk654"] Jan 26 00:14:36 crc kubenswrapper[5124]: I0126 00:14:36.884284 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jk654" podUID="93d4050c-d7fd-40b6-bd58-133f961c4077" containerName="registry-server" containerID="cri-o://4d01b14e6d6288adddf228135c1fd3d03a51f5753f4f3146556b825b89382b9f" gracePeriod=30 Jan 26 00:14:36 crc kubenswrapper[5124]: I0126 00:14:36.908892 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-shkmx"] Jan 26 00:14:36 crc kubenswrapper[5124]: I0126 00:14:36.912061 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-shkmx" podUID="5ec6118e-bf44-44b1-8098-637ebd0083f7" containerName="registry-server" containerID="cri-o://c76f97227b6c37dc8fa602630009930435dc45b0801d70920b6538fb8dc1cb5c" gracePeriod=30 Jan 26 00:14:36 crc kubenswrapper[5124]: I0126 00:14:36.942841 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-5hwt4"] Jan 26 00:14:36 crc kubenswrapper[5124]: I0126 00:14:36.943164 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-5hwt4" podUID="973d580d-7e62-419e-be96-115733ca98bf" containerName="marketplace-operator" containerID="cri-o://a544e3ba1690c2df00e3fec1ddda712f20e83b51cbd2413032be903a9db9297b" gracePeriod=30 Jan 26 00:14:36 crc kubenswrapper[5124]: I0126 00:14:36.948442 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4898t"] Jan 26 00:14:36 
crc kubenswrapper[5124]: I0126 00:14:36.948907 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4898t" podUID="67b1669f-4753-4b71-bf6f-3b1972f4f33d" containerName="registry-server" containerID="cri-o://19abfeb851e1ad15dae47c652f6d05276eef1067a9497556f6d532afe731a544" gracePeriod=30 Jan 26 00:14:36 crc kubenswrapper[5124]: I0126 00:14:36.956455 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-btzrz"] Jan 26 00:14:36 crc kubenswrapper[5124]: I0126 00:14:36.963850 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7m58f"] Jan 26 00:14:36 crc kubenswrapper[5124]: I0126 00:14:36.964161 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7m58f" podUID="beb215dd-478e-4b23-b77c-5e741e026932" containerName="registry-server" containerID="cri-o://d636d8c930a6ef7a4d8bca6d30375e240339be66dd74a2341d580b7a669d96e8" gracePeriod=30 Jan 26 00:14:36 crc kubenswrapper[5124]: I0126 00:14:36.964177 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-btzrz" Jan 26 00:14:36 crc kubenswrapper[5124]: I0126 00:14:36.964957 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-btzrz"] Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.068598 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5bd59477-0d46-4047-a6b5-094ec66407f4-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-btzrz\" (UID: \"5bd59477-0d46-4047-a6b5-094ec66407f4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-btzrz" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.068669 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c8sk\" (UniqueName: \"kubernetes.io/projected/5bd59477-0d46-4047-a6b5-094ec66407f4-kube-api-access-7c8sk\") pod \"marketplace-operator-547dbd544d-btzrz\" (UID: \"5bd59477-0d46-4047-a6b5-094ec66407f4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-btzrz" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.068707 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5bd59477-0d46-4047-a6b5-094ec66407f4-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-btzrz\" (UID: \"5bd59477-0d46-4047-a6b5-094ec66407f4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-btzrz" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.068800 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5bd59477-0d46-4047-a6b5-094ec66407f4-tmp\") pod \"marketplace-operator-547dbd544d-btzrz\" (UID: \"5bd59477-0d46-4047-a6b5-094ec66407f4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-btzrz" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.169890 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5bd59477-0d46-4047-a6b5-094ec66407f4-tmp\") pod \"marketplace-operator-547dbd544d-btzrz\" (UID: 
\"5bd59477-0d46-4047-a6b5-094ec66407f4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-btzrz" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.169956 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5bd59477-0d46-4047-a6b5-094ec66407f4-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-btzrz\" (UID: \"5bd59477-0d46-4047-a6b5-094ec66407f4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-btzrz" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.169997 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7c8sk\" (UniqueName: \"kubernetes.io/projected/5bd59477-0d46-4047-a6b5-094ec66407f4-kube-api-access-7c8sk\") pod \"marketplace-operator-547dbd544d-btzrz\" (UID: \"5bd59477-0d46-4047-a6b5-094ec66407f4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-btzrz" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.170452 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5bd59477-0d46-4047-a6b5-094ec66407f4-tmp\") pod \"marketplace-operator-547dbd544d-btzrz\" (UID: \"5bd59477-0d46-4047-a6b5-094ec66407f4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-btzrz" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.170032 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5bd59477-0d46-4047-a6b5-094ec66407f4-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-btzrz\" (UID: \"5bd59477-0d46-4047-a6b5-094ec66407f4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-btzrz" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.171350 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5bd59477-0d46-4047-a6b5-094ec66407f4-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-btzrz\" (UID: \"5bd59477-0d46-4047-a6b5-094ec66407f4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-btzrz" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.181031 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5bd59477-0d46-4047-a6b5-094ec66407f4-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-btzrz\" (UID: \"5bd59477-0d46-4047-a6b5-094ec66407f4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-btzrz" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.192072 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c8sk\" (UniqueName: \"kubernetes.io/projected/5bd59477-0d46-4047-a6b5-094ec66407f4-kube-api-access-7c8sk\") pod \"marketplace-operator-547dbd544d-btzrz\" (UID: \"5bd59477-0d46-4047-a6b5-094ec66407f4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-btzrz" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.241356 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-btzrz" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.332207 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jk654" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.352908 5124 generic.go:358] "Generic (PLEG): container finished" podID="67b1669f-4753-4b71-bf6f-3b1972f4f33d" containerID="19abfeb851e1ad15dae47c652f6d05276eef1067a9497556f6d532afe731a544" exitCode=0 Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.353108 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4898t" event={"ID":"67b1669f-4753-4b71-bf6f-3b1972f4f33d","Type":"ContainerDied","Data":"19abfeb851e1ad15dae47c652f6d05276eef1067a9497556f6d532afe731a544"} Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.367177 5124 generic.go:358] "Generic (PLEG): container finished" podID="93d4050c-d7fd-40b6-bd58-133f961c4077" containerID="4d01b14e6d6288adddf228135c1fd3d03a51f5753f4f3146556b825b89382b9f" exitCode=0 Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.367306 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jk654" event={"ID":"93d4050c-d7fd-40b6-bd58-133f961c4077","Type":"ContainerDied","Data":"4d01b14e6d6288adddf228135c1fd3d03a51f5753f4f3146556b825b89382b9f"} Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.367343 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jk654" event={"ID":"93d4050c-d7fd-40b6-bd58-133f961c4077","Type":"ContainerDied","Data":"c0fc93185bafc71ea165ce4feeb39bee289bb60989a3f867b1ad39aa1a2721fc"} Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.367365 5124 scope.go:117] "RemoveContainer" containerID="4d01b14e6d6288adddf228135c1fd3d03a51f5753f4f3146556b825b89382b9f" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.367542 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jk654" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.409719 5124 generic.go:358] "Generic (PLEG): container finished" podID="5ec6118e-bf44-44b1-8098-637ebd0083f7" containerID="c76f97227b6c37dc8fa602630009930435dc45b0801d70920b6538fb8dc1cb5c" exitCode=0 Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.409879 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shkmx" event={"ID":"5ec6118e-bf44-44b1-8098-637ebd0083f7","Type":"ContainerDied","Data":"c76f97227b6c37dc8fa602630009930435dc45b0801d70920b6538fb8dc1cb5c"} Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.410507 5124 scope.go:117] "RemoveContainer" containerID="12cb207cfb6cc3e8609fb618fbd895f956194d970049a203dcd54e52b273a060" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.417970 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-shkmx" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.419270 5124 generic.go:358] "Generic (PLEG): container finished" podID="beb215dd-478e-4b23-b77c-5e741e026932" containerID="d636d8c930a6ef7a4d8bca6d30375e240339be66dd74a2341d580b7a669d96e8" exitCode=0 Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.419430 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7m58f" event={"ID":"beb215dd-478e-4b23-b77c-5e741e026932","Type":"ContainerDied","Data":"d636d8c930a6ef7a4d8bca6d30375e240339be66dd74a2341d580b7a669d96e8"} Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.426286 5124 generic.go:358] "Generic (PLEG): container finished" podID="973d580d-7e62-419e-be96-115733ca98bf" containerID="a544e3ba1690c2df00e3fec1ddda712f20e83b51cbd2413032be903a9db9297b" exitCode=0 Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.426375 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-5hwt4" event={"ID":"973d580d-7e62-419e-be96-115733ca98bf","Type":"ContainerDied","Data":"a544e3ba1690c2df00e3fec1ddda712f20e83b51cbd2413032be903a9db9297b"} Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.426803 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-5hwt4" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.429300 5124 scope.go:117] "RemoveContainer" containerID="962a60ef248cf1ab7721f0a553e5c60c33592ac0de743d71080fa266306631e9" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.455843 5124 scope.go:117] "RemoveContainer" containerID="4d01b14e6d6288adddf228135c1fd3d03a51f5753f4f3146556b825b89382b9f" Jan 26 00:14:37 crc kubenswrapper[5124]: E0126 00:14:37.458965 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d01b14e6d6288adddf228135c1fd3d03a51f5753f4f3146556b825b89382b9f\": container with ID starting with 4d01b14e6d6288adddf228135c1fd3d03a51f5753f4f3146556b825b89382b9f not found: ID does not exist" containerID="4d01b14e6d6288adddf228135c1fd3d03a51f5753f4f3146556b825b89382b9f" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.459152 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d01b14e6d6288adddf228135c1fd3d03a51f5753f4f3146556b825b89382b9f"} err="failed to get container status \"4d01b14e6d6288adddf228135c1fd3d03a51f5753f4f3146556b825b89382b9f\": rpc error: code = NotFound desc = could not find container \"4d01b14e6d6288adddf228135c1fd3d03a51f5753f4f3146556b825b89382b9f\": container with ID starting with 4d01b14e6d6288adddf228135c1fd3d03a51f5753f4f3146556b825b89382b9f not found: ID does not exist" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.459187 5124 scope.go:117] "RemoveContainer" containerID="12cb207cfb6cc3e8609fb618fbd895f956194d970049a203dcd54e52b273a060" Jan 26 00:14:37 crc kubenswrapper[5124]: E0126 00:14:37.460052 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12cb207cfb6cc3e8609fb618fbd895f956194d970049a203dcd54e52b273a060\": container with ID starting with 12cb207cfb6cc3e8609fb618fbd895f956194d970049a203dcd54e52b273a060 not found: ID does not exist" containerID="12cb207cfb6cc3e8609fb618fbd895f956194d970049a203dcd54e52b273a060" Jan 26 00:14:37 crc kubenswrapper[5124]: 
I0126 00:14:37.460081 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12cb207cfb6cc3e8609fb618fbd895f956194d970049a203dcd54e52b273a060"} err="failed to get container status \"12cb207cfb6cc3e8609fb618fbd895f956194d970049a203dcd54e52b273a060\": rpc error: code = NotFound desc = could not find container \"12cb207cfb6cc3e8609fb618fbd895f956194d970049a203dcd54e52b273a060\": container with ID starting with 12cb207cfb6cc3e8609fb618fbd895f956194d970049a203dcd54e52b273a060 not found: ID does not exist" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.460098 5124 scope.go:117] "RemoveContainer" containerID="962a60ef248cf1ab7721f0a553e5c60c33592ac0de743d71080fa266306631e9" Jan 26 00:14:37 crc kubenswrapper[5124]: E0126 00:14:37.460521 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"962a60ef248cf1ab7721f0a553e5c60c33592ac0de743d71080fa266306631e9\": container with ID starting with 962a60ef248cf1ab7721f0a553e5c60c33592ac0de743d71080fa266306631e9 not found: ID does not exist" containerID="962a60ef248cf1ab7721f0a553e5c60c33592ac0de743d71080fa266306631e9" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.460551 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"962a60ef248cf1ab7721f0a553e5c60c33592ac0de743d71080fa266306631e9"} err="failed to get container status \"962a60ef248cf1ab7721f0a553e5c60c33592ac0de743d71080fa266306631e9\": rpc error: code = NotFound desc = could not find container \"962a60ef248cf1ab7721f0a553e5c60c33592ac0de743d71080fa266306631e9\": container with ID starting with 962a60ef248cf1ab7721f0a553e5c60c33592ac0de743d71080fa266306631e9 not found: ID does not exist" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.460568 5124 scope.go:117] "RemoveContainer" containerID="8e9091f8fed28f88cf73c06f29899ff7362d84ec97673a79cb6fcebd3feb183a" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.473844 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93d4050c-d7fd-40b6-bd58-133f961c4077-catalog-content\") pod \"93d4050c-d7fd-40b6-bd58-133f961c4077\" (UID: \"93d4050c-d7fd-40b6-bd58-133f961c4077\") " Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.473946 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93d4050c-d7fd-40b6-bd58-133f961c4077-utilities\") pod \"93d4050c-d7fd-40b6-bd58-133f961c4077\" (UID: \"93d4050c-d7fd-40b6-bd58-133f961c4077\") " Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.473987 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cf8pl\" (UniqueName: \"kubernetes.io/projected/93d4050c-d7fd-40b6-bd58-133f961c4077-kube-api-access-cf8pl\") pod \"93d4050c-d7fd-40b6-bd58-133f961c4077\" (UID: \"93d4050c-d7fd-40b6-bd58-133f961c4077\") " Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.475058 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93d4050c-d7fd-40b6-bd58-133f961c4077-utilities" (OuterVolumeSpecName: "utilities") pod "93d4050c-d7fd-40b6-bd58-133f961c4077" (UID: "93d4050c-d7fd-40b6-bd58-133f961c4077"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.480329 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93d4050c-d7fd-40b6-bd58-133f961c4077-kube-api-access-cf8pl" (OuterVolumeSpecName: "kube-api-access-cf8pl") pod "93d4050c-d7fd-40b6-bd58-133f961c4077" (UID: "93d4050c-d7fd-40b6-bd58-133f961c4077"). InnerVolumeSpecName "kube-api-access-cf8pl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.510664 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93d4050c-d7fd-40b6-bd58-133f961c4077-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "93d4050c-d7fd-40b6-bd58-133f961c4077" (UID: "93d4050c-d7fd-40b6-bd58-133f961c4077"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.514220 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4898t" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.519090 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7m58f" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.574991 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ec6118e-bf44-44b1-8098-637ebd0083f7-catalog-content\") pod \"5ec6118e-bf44-44b1-8098-637ebd0083f7\" (UID: \"5ec6118e-bf44-44b1-8098-637ebd0083f7\") " Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.575036 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clbcp\" (UniqueName: \"kubernetes.io/projected/973d580d-7e62-419e-be96-115733ca98bf-kube-api-access-clbcp\") pod \"973d580d-7e62-419e-be96-115733ca98bf\" (UID: \"973d580d-7e62-419e-be96-115733ca98bf\") " Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.575076 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ec6118e-bf44-44b1-8098-637ebd0083f7-utilities\") pod \"5ec6118e-bf44-44b1-8098-637ebd0083f7\" (UID: \"5ec6118e-bf44-44b1-8098-637ebd0083f7\") " Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.575097 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/973d580d-7e62-419e-be96-115733ca98bf-marketplace-trusted-ca\") pod \"973d580d-7e62-419e-be96-115733ca98bf\" (UID: \"973d580d-7e62-419e-be96-115733ca98bf\") " Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.575131 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/973d580d-7e62-419e-be96-115733ca98bf-tmp\") pod \"973d580d-7e62-419e-be96-115733ca98bf\" (UID: \"973d580d-7e62-419e-be96-115733ca98bf\") " Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.575196 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj67x\" (UniqueName: \"kubernetes.io/projected/5ec6118e-bf44-44b1-8098-637ebd0083f7-kube-api-access-pj67x\") pod \"5ec6118e-bf44-44b1-8098-637ebd0083f7\" (UID: \"5ec6118e-bf44-44b1-8098-637ebd0083f7\") " Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 
00:14:37.575242 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/973d580d-7e62-419e-be96-115733ca98bf-marketplace-operator-metrics\") pod \"973d580d-7e62-419e-be96-115733ca98bf\" (UID: \"973d580d-7e62-419e-be96-115733ca98bf\") " Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.575411 5124 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93d4050c-d7fd-40b6-bd58-133f961c4077-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.575428 5124 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93d4050c-d7fd-40b6-bd58-133f961c4077-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.575436 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cf8pl\" (UniqueName: \"kubernetes.io/projected/93d4050c-d7fd-40b6-bd58-133f961c4077-kube-api-access-cf8pl\") on node \"crc\" DevicePath \"\"" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.576914 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/973d580d-7e62-419e-be96-115733ca98bf-tmp" (OuterVolumeSpecName: "tmp") pod "973d580d-7e62-419e-be96-115733ca98bf" (UID: "973d580d-7e62-419e-be96-115733ca98bf"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.577129 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/973d580d-7e62-419e-be96-115733ca98bf-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "973d580d-7e62-419e-be96-115733ca98bf" (UID: "973d580d-7e62-419e-be96-115733ca98bf"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.577382 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ec6118e-bf44-44b1-8098-637ebd0083f7-utilities" (OuterVolumeSpecName: "utilities") pod "5ec6118e-bf44-44b1-8098-637ebd0083f7" (UID: "5ec6118e-bf44-44b1-8098-637ebd0083f7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.579432 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/973d580d-7e62-419e-be96-115733ca98bf-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "973d580d-7e62-419e-be96-115733ca98bf" (UID: "973d580d-7e62-419e-be96-115733ca98bf"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.580060 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ec6118e-bf44-44b1-8098-637ebd0083f7-kube-api-access-pj67x" (OuterVolumeSpecName: "kube-api-access-pj67x") pod "5ec6118e-bf44-44b1-8098-637ebd0083f7" (UID: "5ec6118e-bf44-44b1-8098-637ebd0083f7"). InnerVolumeSpecName "kube-api-access-pj67x". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.581026 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/973d580d-7e62-419e-be96-115733ca98bf-kube-api-access-clbcp" (OuterVolumeSpecName: "kube-api-access-clbcp") pod "973d580d-7e62-419e-be96-115733ca98bf" (UID: "973d580d-7e62-419e-be96-115733ca98bf"). InnerVolumeSpecName "kube-api-access-clbcp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.622539 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ec6118e-bf44-44b1-8098-637ebd0083f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ec6118e-bf44-44b1-8098-637ebd0083f7" (UID: "5ec6118e-bf44-44b1-8098-637ebd0083f7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.676424 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67b1669f-4753-4b71-bf6f-3b1972f4f33d-utilities\") pod \"67b1669f-4753-4b71-bf6f-3b1972f4f33d\" (UID: \"67b1669f-4753-4b71-bf6f-3b1972f4f33d\") " Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.676537 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beb215dd-478e-4b23-b77c-5e741e026932-catalog-content\") pod \"beb215dd-478e-4b23-b77c-5e741e026932\" (UID: \"beb215dd-478e-4b23-b77c-5e741e026932\") " Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.676617 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jprtm\" (UniqueName: \"kubernetes.io/projected/67b1669f-4753-4b71-bf6f-3b1972f4f33d-kube-api-access-jprtm\") pod \"67b1669f-4753-4b71-bf6f-3b1972f4f33d\" (UID: \"67b1669f-4753-4b71-bf6f-3b1972f4f33d\") " Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.676675 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67b1669f-4753-4b71-bf6f-3b1972f4f33d-catalog-content\") pod \"67b1669f-4753-4b71-bf6f-3b1972f4f33d\" (UID: \"67b1669f-4753-4b71-bf6f-3b1972f4f33d\") " Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.676719 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/beb215dd-478e-4b23-b77c-5e741e026932-utilities\") pod \"beb215dd-478e-4b23-b77c-5e741e026932\" (UID: \"beb215dd-478e-4b23-b77c-5e741e026932\") " Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.676753 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lf772\" (UniqueName: \"kubernetes.io/projected/beb215dd-478e-4b23-b77c-5e741e026932-kube-api-access-lf772\") pod \"beb215dd-478e-4b23-b77c-5e741e026932\" (UID: \"beb215dd-478e-4b23-b77c-5e741e026932\") " Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.676954 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pj67x\" (UniqueName: \"kubernetes.io/projected/5ec6118e-bf44-44b1-8098-637ebd0083f7-kube-api-access-pj67x\") on node \"crc\" DevicePath \"\"" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.676975 5124 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/973d580d-7e62-419e-be96-115733ca98bf-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.676984 5124 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ec6118e-bf44-44b1-8098-637ebd0083f7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.676993 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-clbcp\" (UniqueName: \"kubernetes.io/projected/973d580d-7e62-419e-be96-115733ca98bf-kube-api-access-clbcp\") on node \"crc\" DevicePath \"\"" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.677003 5124 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ec6118e-bf44-44b1-8098-637ebd0083f7-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.677013 5124 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/973d580d-7e62-419e-be96-115733ca98bf-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.677023 5124 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/973d580d-7e62-419e-be96-115733ca98bf-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.679628 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beb215dd-478e-4b23-b77c-5e741e026932-kube-api-access-lf772" (OuterVolumeSpecName: "kube-api-access-lf772") pod "beb215dd-478e-4b23-b77c-5e741e026932" (UID: "beb215dd-478e-4b23-b77c-5e741e026932"). InnerVolumeSpecName "kube-api-access-lf772". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.680475 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67b1669f-4753-4b71-bf6f-3b1972f4f33d-kube-api-access-jprtm" (OuterVolumeSpecName: "kube-api-access-jprtm") pod "67b1669f-4753-4b71-bf6f-3b1972f4f33d" (UID: "67b1669f-4753-4b71-bf6f-3b1972f4f33d"). InnerVolumeSpecName "kube-api-access-jprtm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.681417 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67b1669f-4753-4b71-bf6f-3b1972f4f33d-utilities" (OuterVolumeSpecName: "utilities") pod "67b1669f-4753-4b71-bf6f-3b1972f4f33d" (UID: "67b1669f-4753-4b71-bf6f-3b1972f4f33d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.691672 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/beb215dd-478e-4b23-b77c-5e741e026932-utilities" (OuterVolumeSpecName: "utilities") pod "beb215dd-478e-4b23-b77c-5e741e026932" (UID: "beb215dd-478e-4b23-b77c-5e741e026932"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.695325 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67b1669f-4753-4b71-bf6f-3b1972f4f33d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67b1669f-4753-4b71-bf6f-3b1972f4f33d" (UID: "67b1669f-4753-4b71-bf6f-3b1972f4f33d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.721644 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jk654"] Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.724951 5124 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jk654"] Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.776437 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-btzrz"] Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.783851 5124 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67b1669f-4753-4b71-bf6f-3b1972f4f33d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.783878 5124 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/beb215dd-478e-4b23-b77c-5e741e026932-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.784136 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lf772\" (UniqueName: \"kubernetes.io/projected/beb215dd-478e-4b23-b77c-5e741e026932-kube-api-access-lf772\") on node \"crc\" DevicePath \"\"" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.784153 5124 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67b1669f-4753-4b71-bf6f-3b1972f4f33d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.784163 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jprtm\" (UniqueName: \"kubernetes.io/projected/67b1669f-4753-4b71-bf6f-3b1972f4f33d-kube-api-access-jprtm\") on node \"crc\" DevicePath \"\"" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.789689 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/beb215dd-478e-4b23-b77c-5e741e026932-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "beb215dd-478e-4b23-b77c-5e741e026932" (UID: "beb215dd-478e-4b23-b77c-5e741e026932"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:14:37 crc kubenswrapper[5124]: I0126 00:14:37.885685 5124 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beb215dd-478e-4b23-b77c-5e741e026932-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:14:38 crc kubenswrapper[5124]: I0126 00:14:38.372058 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93d4050c-d7fd-40b6-bd58-133f961c4077" path="/var/lib/kubelet/pods/93d4050c-d7fd-40b6-bd58-133f961c4077/volumes" Jan 26 00:14:38 crc kubenswrapper[5124]: I0126 00:14:38.434168 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-shkmx" Jan 26 00:14:38 crc kubenswrapper[5124]: I0126 00:14:38.434166 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shkmx" event={"ID":"5ec6118e-bf44-44b1-8098-637ebd0083f7","Type":"ContainerDied","Data":"99b9d8eac78e81d3e816527e0264d0b9a587cb8f8e12b6d81af1f7c75f908bb8"} Jan 26 00:14:38 crc kubenswrapper[5124]: I0126 00:14:38.434329 5124 scope.go:117] "RemoveContainer" containerID="c76f97227b6c37dc8fa602630009930435dc45b0801d70920b6538fb8dc1cb5c" Jan 26 00:14:38 crc kubenswrapper[5124]: I0126 00:14:38.437809 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-btzrz" event={"ID":"5bd59477-0d46-4047-a6b5-094ec66407f4","Type":"ContainerStarted","Data":"6e9711a158703b76b3c34b4d66feeaab0a2620648065ee19709d185898215d7a"} Jan 26 00:14:38 crc kubenswrapper[5124]: I0126 00:14:38.437865 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-btzrz" event={"ID":"5bd59477-0d46-4047-a6b5-094ec66407f4","Type":"ContainerStarted","Data":"1ec57107dd61c29ca36f9b46de63da8d18555c6a6c13c32741257f80b3b1e945"} Jan 26 00:14:38 crc kubenswrapper[5124]: I0126 00:14:38.438845 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-btzrz" Jan 26 00:14:38 crc kubenswrapper[5124]: I0126 00:14:38.445117 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-btzrz" Jan 26 00:14:38 crc kubenswrapper[5124]: I0126 00:14:38.447159 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7m58f" event={"ID":"beb215dd-478e-4b23-b77c-5e741e026932","Type":"ContainerDied","Data":"2e984ef118349a8feef1f21a6a3ee57d7b6fe636ac627412c33ea58a2510f7f1"} Jan 26 00:14:38 crc kubenswrapper[5124]: I0126 00:14:38.447272 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7m58f" Jan 26 00:14:38 crc kubenswrapper[5124]: I0126 00:14:38.449012 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-5hwt4" Jan 26 00:14:38 crc kubenswrapper[5124]: I0126 00:14:38.449316 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-5hwt4" event={"ID":"973d580d-7e62-419e-be96-115733ca98bf","Type":"ContainerDied","Data":"e09a49c0f2cca84d58fcef42008b09f8dd94517e0d7ae07b317ca592bd050d97"} Jan 26 00:14:38 crc kubenswrapper[5124]: I0126 00:14:38.450624 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4898t" event={"ID":"67b1669f-4753-4b71-bf6f-3b1972f4f33d","Type":"ContainerDied","Data":"96ce5cd946093f2830211550351d99eb448d2963ec3ca80cacfe6935eb94664f"} Jan 26 00:14:38 crc kubenswrapper[5124]: I0126 00:14:38.450686 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4898t" Jan 26 00:14:38 crc kubenswrapper[5124]: I0126 00:14:38.457994 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-btzrz" podStartSLOduration=2.457972637 podStartE2EDuration="2.457972637s" podCreationTimestamp="2026-01-26 00:14:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:14:38.454864497 +0000 UTC m=+356.363783856" watchObservedRunningTime="2026-01-26 00:14:38.457972637 +0000 UTC m=+356.366891986" Jan 26 00:14:38 crc kubenswrapper[5124]: I0126 00:14:38.463388 5124 scope.go:117] "RemoveContainer" containerID="6358f2631a13523f6a6804dc25a3f3787b165a85900798cc2210a947185a7a1d" Jan 26 00:14:38 crc kubenswrapper[5124]: I0126 00:14:38.484800 5124 scope.go:117] "RemoveContainer" containerID="e8ec1b7c1a9eb89bda875136238c0bda2e7a9f0fc56c0f42e2970b83c67ade57" Jan 26 00:14:38 crc kubenswrapper[5124]: I0126 00:14:38.496031 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-shkmx"] Jan 26 00:14:38 crc kubenswrapper[5124]: I0126 00:14:38.500306 5124 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-shkmx"] Jan 26 00:14:38 crc kubenswrapper[5124]: I0126 00:14:38.516453 5124 scope.go:117] "RemoveContainer" containerID="d636d8c930a6ef7a4d8bca6d30375e240339be66dd74a2341d580b7a669d96e8" Jan 26 00:14:38 crc kubenswrapper[5124]: I0126 00:14:38.532811 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4898t"] Jan 26 00:14:38 crc kubenswrapper[5124]: I0126 00:14:38.540643 5124 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4898t"] Jan 26 00:14:38 crc kubenswrapper[5124]: I0126 00:14:38.546778 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-5hwt4"] Jan 26 00:14:38 crc kubenswrapper[5124]: I0126 00:14:38.549720 5124 scope.go:117] "RemoveContainer" containerID="585ad95565c25b69404f055f5952485511d97b01057417249e6e093bd69de12b" Jan 26 00:14:38 crc kubenswrapper[5124]: I0126 00:14:38.554551 5124 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-5hwt4"] Jan 26 00:14:38 crc kubenswrapper[5124]: I0126 00:14:38.560698 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7m58f"] Jan 26 00:14:38 crc kubenswrapper[5124]: I0126 00:14:38.566146 5124 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7m58f"] Jan 26 00:14:38 crc kubenswrapper[5124]: I0126 00:14:38.571776 5124 scope.go:117] "RemoveContainer" containerID="7b8ddddc66633ed3855fa351c98bf8ace80162b9365fda3ddc9af06f1e2fcf04" Jan 26 00:14:38 crc kubenswrapper[5124]: I0126 00:14:38.584851 5124 scope.go:117] "RemoveContainer" containerID="a544e3ba1690c2df00e3fec1ddda712f20e83b51cbd2413032be903a9db9297b" Jan 26 00:14:38 crc kubenswrapper[5124]: I0126 00:14:38.603178 5124 scope.go:117] "RemoveContainer" containerID="19abfeb851e1ad15dae47c652f6d05276eef1067a9497556f6d532afe731a544" Jan 26 00:14:38 crc kubenswrapper[5124]: I0126 00:14:38.615205 5124 scope.go:117] "RemoveContainer" containerID="0524988e08b7745561df4411beff1a274b89c41c7774c5ca9de4a2a607d5bdda" Jan 26 00:14:38 crc kubenswrapper[5124]: I0126 
00:14:38.631240 5124 scope.go:117] "RemoveContainer" containerID="257ee670e3b3eca172f20d10f08eb87301097803b111cc82d59a96773c86c0ba" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.101916 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8cbbm"] Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.102707 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="beb215dd-478e-4b23-b77c-5e741e026932" containerName="extract-content" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.102720 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="beb215dd-478e-4b23-b77c-5e741e026932" containerName="extract-content" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.102728 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5ec6118e-bf44-44b1-8098-637ebd0083f7" containerName="extract-utilities" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.102733 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ec6118e-bf44-44b1-8098-637ebd0083f7" containerName="extract-utilities" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.102743 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="67b1669f-4753-4b71-bf6f-3b1972f4f33d" containerName="extract-utilities" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.102748 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="67b1669f-4753-4b71-bf6f-3b1972f4f33d" containerName="extract-utilities" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.102757 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="67b1669f-4753-4b71-bf6f-3b1972f4f33d" containerName="registry-server" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.102763 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="67b1669f-4753-4b71-bf6f-3b1972f4f33d" containerName="registry-server" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.102769 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="beb215dd-478e-4b23-b77c-5e741e026932" containerName="extract-utilities" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.102774 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="beb215dd-478e-4b23-b77c-5e741e026932" containerName="extract-utilities" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.102784 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="93d4050c-d7fd-40b6-bd58-133f961c4077" containerName="extract-content" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.102789 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="93d4050c-d7fd-40b6-bd58-133f961c4077" containerName="extract-content" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.102802 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="67b1669f-4753-4b71-bf6f-3b1972f4f33d" containerName="extract-content" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.102807 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="67b1669f-4753-4b71-bf6f-3b1972f4f33d" containerName="extract-content" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.102814 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="93d4050c-d7fd-40b6-bd58-133f961c4077" containerName="registry-server" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.102819 5124 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="93d4050c-d7fd-40b6-bd58-133f961c4077" containerName="registry-server" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.102825 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="beb215dd-478e-4b23-b77c-5e741e026932" containerName="registry-server" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.102830 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="beb215dd-478e-4b23-b77c-5e741e026932" containerName="registry-server" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.102838 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="93d4050c-d7fd-40b6-bd58-133f961c4077" containerName="extract-utilities" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.102844 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="93d4050c-d7fd-40b6-bd58-133f961c4077" containerName="extract-utilities" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.102852 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="973d580d-7e62-419e-be96-115733ca98bf" containerName="marketplace-operator" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.102857 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="973d580d-7e62-419e-be96-115733ca98bf" containerName="marketplace-operator" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.102865 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="973d580d-7e62-419e-be96-115733ca98bf" containerName="marketplace-operator" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.102870 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="973d580d-7e62-419e-be96-115733ca98bf" containerName="marketplace-operator" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.102878 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5ec6118e-bf44-44b1-8098-637ebd0083f7" containerName="extract-content" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.102883 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ec6118e-bf44-44b1-8098-637ebd0083f7" containerName="extract-content" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.102890 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5ec6118e-bf44-44b1-8098-637ebd0083f7" containerName="registry-server" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.102895 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ec6118e-bf44-44b1-8098-637ebd0083f7" containerName="registry-server" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.102969 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="5ec6118e-bf44-44b1-8098-637ebd0083f7" containerName="registry-server" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.102978 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="973d580d-7e62-419e-be96-115733ca98bf" containerName="marketplace-operator" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.102985 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="67b1669f-4753-4b71-bf6f-3b1972f4f33d" containerName="registry-server" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.102991 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="93d4050c-d7fd-40b6-bd58-133f961c4077" containerName="registry-server" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.103000 5124 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="beb215dd-478e-4b23-b77c-5e741e026932" containerName="registry-server" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.103171 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="973d580d-7e62-419e-be96-115733ca98bf" containerName="marketplace-operator" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.128895 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8cbbm"] Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.129080 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8cbbm" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.138513 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.295950 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9clqw"] Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.302006 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9clqw" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.305728 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.306393 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9990d252-13eb-476d-a56a-6f40fad4a3a3-utilities\") pod \"redhat-marketplace-8cbbm\" (UID: \"9990d252-13eb-476d-a56a-6f40fad4a3a3\") " pod="openshift-marketplace/redhat-marketplace-8cbbm" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.306435 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9990d252-13eb-476d-a56a-6f40fad4a3a3-catalog-content\") pod \"redhat-marketplace-8cbbm\" (UID: \"9990d252-13eb-476d-a56a-6f40fad4a3a3\") " pod="openshift-marketplace/redhat-marketplace-8cbbm" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.306483 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92b5p\" (UniqueName: \"kubernetes.io/projected/9990d252-13eb-476d-a56a-6f40fad4a3a3-kube-api-access-92b5p\") pod \"redhat-marketplace-8cbbm\" (UID: \"9990d252-13eb-476d-a56a-6f40fad4a3a3\") " pod="openshift-marketplace/redhat-marketplace-8cbbm" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.306575 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9clqw"] Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.407893 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9990d252-13eb-476d-a56a-6f40fad4a3a3-utilities\") pod \"redhat-marketplace-8cbbm\" (UID: \"9990d252-13eb-476d-a56a-6f40fad4a3a3\") " pod="openshift-marketplace/redhat-marketplace-8cbbm" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.407938 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48b3ebb2-7731-4d34-b50d-a4ded959d5d4-utilities\") pod \"certified-operators-9clqw\" (UID: 
\"48b3ebb2-7731-4d34-b50d-a4ded959d5d4\") " pod="openshift-marketplace/certified-operators-9clqw" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.407975 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9990d252-13eb-476d-a56a-6f40fad4a3a3-catalog-content\") pod \"redhat-marketplace-8cbbm\" (UID: \"9990d252-13eb-476d-a56a-6f40fad4a3a3\") " pod="openshift-marketplace/redhat-marketplace-8cbbm" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.408014 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48b3ebb2-7731-4d34-b50d-a4ded959d5d4-catalog-content\") pod \"certified-operators-9clqw\" (UID: \"48b3ebb2-7731-4d34-b50d-a4ded959d5d4\") " pod="openshift-marketplace/certified-operators-9clqw" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.408031 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttjls\" (UniqueName: \"kubernetes.io/projected/48b3ebb2-7731-4d34-b50d-a4ded959d5d4-kube-api-access-ttjls\") pod \"certified-operators-9clqw\" (UID: \"48b3ebb2-7731-4d34-b50d-a4ded959d5d4\") " pod="openshift-marketplace/certified-operators-9clqw" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.408167 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-92b5p\" (UniqueName: \"kubernetes.io/projected/9990d252-13eb-476d-a56a-6f40fad4a3a3-kube-api-access-92b5p\") pod \"redhat-marketplace-8cbbm\" (UID: \"9990d252-13eb-476d-a56a-6f40fad4a3a3\") " pod="openshift-marketplace/redhat-marketplace-8cbbm" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.408511 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9990d252-13eb-476d-a56a-6f40fad4a3a3-catalog-content\") pod \"redhat-marketplace-8cbbm\" (UID: \"9990d252-13eb-476d-a56a-6f40fad4a3a3\") " pod="openshift-marketplace/redhat-marketplace-8cbbm" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.409435 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9990d252-13eb-476d-a56a-6f40fad4a3a3-utilities\") pod \"redhat-marketplace-8cbbm\" (UID: \"9990d252-13eb-476d-a56a-6f40fad4a3a3\") " pod="openshift-marketplace/redhat-marketplace-8cbbm" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.436484 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-92b5p\" (UniqueName: \"kubernetes.io/projected/9990d252-13eb-476d-a56a-6f40fad4a3a3-kube-api-access-92b5p\") pod \"redhat-marketplace-8cbbm\" (UID: \"9990d252-13eb-476d-a56a-6f40fad4a3a3\") " pod="openshift-marketplace/redhat-marketplace-8cbbm" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.443956 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8cbbm" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.513938 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48b3ebb2-7731-4d34-b50d-a4ded959d5d4-utilities\") pod \"certified-operators-9clqw\" (UID: \"48b3ebb2-7731-4d34-b50d-a4ded959d5d4\") " pod="openshift-marketplace/certified-operators-9clqw" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.514044 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48b3ebb2-7731-4d34-b50d-a4ded959d5d4-catalog-content\") pod \"certified-operators-9clqw\" (UID: \"48b3ebb2-7731-4d34-b50d-a4ded959d5d4\") " pod="openshift-marketplace/certified-operators-9clqw" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.514065 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ttjls\" (UniqueName: \"kubernetes.io/projected/48b3ebb2-7731-4d34-b50d-a4ded959d5d4-kube-api-access-ttjls\") pod \"certified-operators-9clqw\" (UID: \"48b3ebb2-7731-4d34-b50d-a4ded959d5d4\") " pod="openshift-marketplace/certified-operators-9clqw" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.515121 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48b3ebb2-7731-4d34-b50d-a4ded959d5d4-utilities\") pod \"certified-operators-9clqw\" (UID: \"48b3ebb2-7731-4d34-b50d-a4ded959d5d4\") " pod="openshift-marketplace/certified-operators-9clqw" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.515148 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48b3ebb2-7731-4d34-b50d-a4ded959d5d4-catalog-content\") pod \"certified-operators-9clqw\" (UID: \"48b3ebb2-7731-4d34-b50d-a4ded959d5d4\") " pod="openshift-marketplace/certified-operators-9clqw" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.531410 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttjls\" (UniqueName: \"kubernetes.io/projected/48b3ebb2-7731-4d34-b50d-a4ded959d5d4-kube-api-access-ttjls\") pod \"certified-operators-9clqw\" (UID: \"48b3ebb2-7731-4d34-b50d-a4ded959d5d4\") " pod="openshift-marketplace/certified-operators-9clqw" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.614821 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9clqw" Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.831477 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8cbbm"] Jan 26 00:14:39 crc kubenswrapper[5124]: W0126 00:14:39.839889 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9990d252_13eb_476d_a56a_6f40fad4a3a3.slice/crio-4802644de59bed6c8d093039a38f958b68d5e1cc5fe7f848cd413cdd68b9e6b1 WatchSource:0}: Error finding container 4802644de59bed6c8d093039a38f958b68d5e1cc5fe7f848cd413cdd68b9e6b1: Status 404 returned error can't find the container with id 4802644de59bed6c8d093039a38f958b68d5e1cc5fe7f848cd413cdd68b9e6b1 Jan 26 00:14:39 crc kubenswrapper[5124]: I0126 00:14:39.997094 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9clqw"] Jan 26 00:14:40 crc kubenswrapper[5124]: W0126 00:14:40.005507 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48b3ebb2_7731_4d34_b50d_a4ded959d5d4.slice/crio-586a828f95dc99438f413be39d42b33aa32b12f5eb8bb3bdf766bca7296aa2f3 WatchSource:0}: Error finding container 586a828f95dc99438f413be39d42b33aa32b12f5eb8bb3bdf766bca7296aa2f3: Status 404 returned error can't find the container with id 586a828f95dc99438f413be39d42b33aa32b12f5eb8bb3bdf766bca7296aa2f3 Jan 26 00:14:40 crc kubenswrapper[5124]: I0126 00:14:40.372957 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ec6118e-bf44-44b1-8098-637ebd0083f7" path="/var/lib/kubelet/pods/5ec6118e-bf44-44b1-8098-637ebd0083f7/volumes" Jan 26 00:14:40 crc kubenswrapper[5124]: I0126 00:14:40.374535 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67b1669f-4753-4b71-bf6f-3b1972f4f33d" path="/var/lib/kubelet/pods/67b1669f-4753-4b71-bf6f-3b1972f4f33d/volumes" Jan 26 00:14:40 crc kubenswrapper[5124]: I0126 00:14:40.375439 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="973d580d-7e62-419e-be96-115733ca98bf" path="/var/lib/kubelet/pods/973d580d-7e62-419e-be96-115733ca98bf/volumes" Jan 26 00:14:40 crc kubenswrapper[5124]: I0126 00:14:40.376737 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="beb215dd-478e-4b23-b77c-5e741e026932" path="/var/lib/kubelet/pods/beb215dd-478e-4b23-b77c-5e741e026932/volumes" Jan 26 00:14:40 crc kubenswrapper[5124]: I0126 00:14:40.468307 5124 generic.go:358] "Generic (PLEG): container finished" podID="48b3ebb2-7731-4d34-b50d-a4ded959d5d4" containerID="decb3b0578697a611c58ebb12a9c864cca6e650f67e2f496c7ce45c087be3c02" exitCode=0 Jan 26 00:14:40 crc kubenswrapper[5124]: I0126 00:14:40.468385 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9clqw" event={"ID":"48b3ebb2-7731-4d34-b50d-a4ded959d5d4","Type":"ContainerDied","Data":"decb3b0578697a611c58ebb12a9c864cca6e650f67e2f496c7ce45c087be3c02"} Jan 26 00:14:40 crc kubenswrapper[5124]: I0126 00:14:40.468456 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9clqw" event={"ID":"48b3ebb2-7731-4d34-b50d-a4ded959d5d4","Type":"ContainerStarted","Data":"586a828f95dc99438f413be39d42b33aa32b12f5eb8bb3bdf766bca7296aa2f3"} Jan 26 00:14:40 crc kubenswrapper[5124]: I0126 00:14:40.470956 5124 generic.go:358] "Generic (PLEG): container finished" 
podID="9990d252-13eb-476d-a56a-6f40fad4a3a3" containerID="242ead44f4179a1a918ed390417c22a41292d646256eaa2e8f62ed1735bb8f1d" exitCode=0 Jan 26 00:14:40 crc kubenswrapper[5124]: I0126 00:14:40.472235 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8cbbm" event={"ID":"9990d252-13eb-476d-a56a-6f40fad4a3a3","Type":"ContainerDied","Data":"242ead44f4179a1a918ed390417c22a41292d646256eaa2e8f62ed1735bb8f1d"} Jan 26 00:14:40 crc kubenswrapper[5124]: I0126 00:14:40.472253 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8cbbm" event={"ID":"9990d252-13eb-476d-a56a-6f40fad4a3a3","Type":"ContainerStarted","Data":"4802644de59bed6c8d093039a38f958b68d5e1cc5fe7f848cd413cdd68b9e6b1"} Jan 26 00:14:41 crc kubenswrapper[5124]: I0126 00:14:41.479109 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9clqw" event={"ID":"48b3ebb2-7731-4d34-b50d-a4ded959d5d4","Type":"ContainerStarted","Data":"43995c06143c9d6d85635c27d6eb4766fbf084346134085f2d0761fc83921134"} Jan 26 00:14:41 crc kubenswrapper[5124]: I0126 00:14:41.482184 5124 generic.go:358] "Generic (PLEG): container finished" podID="9990d252-13eb-476d-a56a-6f40fad4a3a3" containerID="99be3caa5ebf3f1090d443f1d0b283a815f74904e915d1431dcfe7679029ddea" exitCode=0 Jan 26 00:14:41 crc kubenswrapper[5124]: I0126 00:14:41.482236 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8cbbm" event={"ID":"9990d252-13eb-476d-a56a-6f40fad4a3a3","Type":"ContainerDied","Data":"99be3caa5ebf3f1090d443f1d0b283a815f74904e915d1431dcfe7679029ddea"} Jan 26 00:14:41 crc kubenswrapper[5124]: I0126 00:14:41.499653 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2lxw7"] Jan 26 00:14:41 crc kubenswrapper[5124]: I0126 00:14:41.509909 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2lxw7" Jan 26 00:14:41 crc kubenswrapper[5124]: I0126 00:14:41.512059 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 26 00:14:41 crc kubenswrapper[5124]: I0126 00:14:41.516294 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2lxw7"] Jan 26 00:14:41 crc kubenswrapper[5124]: I0126 00:14:41.545617 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3c103ac-5665-4af2-894d-ae43b0926b3f-utilities\") pod \"redhat-operators-2lxw7\" (UID: \"b3c103ac-5665-4af2-894d-ae43b0926b3f\") " pod="openshift-marketplace/redhat-operators-2lxw7" Jan 26 00:14:41 crc kubenswrapper[5124]: I0126 00:14:41.546199 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3c103ac-5665-4af2-894d-ae43b0926b3f-catalog-content\") pod \"redhat-operators-2lxw7\" (UID: \"b3c103ac-5665-4af2-894d-ae43b0926b3f\") " pod="openshift-marketplace/redhat-operators-2lxw7" Jan 26 00:14:41 crc kubenswrapper[5124]: I0126 00:14:41.546415 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpg82\" (UniqueName: \"kubernetes.io/projected/b3c103ac-5665-4af2-894d-ae43b0926b3f-kube-api-access-qpg82\") pod \"redhat-operators-2lxw7\" (UID: \"b3c103ac-5665-4af2-894d-ae43b0926b3f\") " pod="openshift-marketplace/redhat-operators-2lxw7" Jan 26 00:14:41 crc kubenswrapper[5124]: I0126 00:14:41.648355 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3c103ac-5665-4af2-894d-ae43b0926b3f-catalog-content\") pod \"redhat-operators-2lxw7\" (UID: \"b3c103ac-5665-4af2-894d-ae43b0926b3f\") " pod="openshift-marketplace/redhat-operators-2lxw7" Jan 26 00:14:41 crc kubenswrapper[5124]: I0126 00:14:41.648410 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qpg82\" (UniqueName: \"kubernetes.io/projected/b3c103ac-5665-4af2-894d-ae43b0926b3f-kube-api-access-qpg82\") pod \"redhat-operators-2lxw7\" (UID: \"b3c103ac-5665-4af2-894d-ae43b0926b3f\") " pod="openshift-marketplace/redhat-operators-2lxw7" Jan 26 00:14:41 crc kubenswrapper[5124]: I0126 00:14:41.648445 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3c103ac-5665-4af2-894d-ae43b0926b3f-utilities\") pod \"redhat-operators-2lxw7\" (UID: \"b3c103ac-5665-4af2-894d-ae43b0926b3f\") " pod="openshift-marketplace/redhat-operators-2lxw7" Jan 26 00:14:41 crc kubenswrapper[5124]: I0126 00:14:41.648985 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3c103ac-5665-4af2-894d-ae43b0926b3f-utilities\") pod \"redhat-operators-2lxw7\" (UID: \"b3c103ac-5665-4af2-894d-ae43b0926b3f\") " pod="openshift-marketplace/redhat-operators-2lxw7" Jan 26 00:14:41 crc kubenswrapper[5124]: I0126 00:14:41.649348 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3c103ac-5665-4af2-894d-ae43b0926b3f-catalog-content\") pod \"redhat-operators-2lxw7\" (UID: \"b3c103ac-5665-4af2-894d-ae43b0926b3f\") 
" pod="openshift-marketplace/redhat-operators-2lxw7" Jan 26 00:14:41 crc kubenswrapper[5124]: I0126 00:14:41.672487 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpg82\" (UniqueName: \"kubernetes.io/projected/b3c103ac-5665-4af2-894d-ae43b0926b3f-kube-api-access-qpg82\") pod \"redhat-operators-2lxw7\" (UID: \"b3c103ac-5665-4af2-894d-ae43b0926b3f\") " pod="openshift-marketplace/redhat-operators-2lxw7" Jan 26 00:14:41 crc kubenswrapper[5124]: I0126 00:14:41.701207 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hqj2s"] Jan 26 00:14:41 crc kubenswrapper[5124]: I0126 00:14:41.705280 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hqj2s" Jan 26 00:14:41 crc kubenswrapper[5124]: I0126 00:14:41.707193 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hqj2s"] Jan 26 00:14:41 crc kubenswrapper[5124]: I0126 00:14:41.709272 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 26 00:14:41 crc kubenswrapper[5124]: I0126 00:14:41.749879 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4afc7c4-f4b6-43f0-895d-d8eea95e4e44-catalog-content\") pod \"community-operators-hqj2s\" (UID: \"e4afc7c4-f4b6-43f0-895d-d8eea95e4e44\") " pod="openshift-marketplace/community-operators-hqj2s" Jan 26 00:14:41 crc kubenswrapper[5124]: I0126 00:14:41.749951 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44hfw\" (UniqueName: \"kubernetes.io/projected/e4afc7c4-f4b6-43f0-895d-d8eea95e4e44-kube-api-access-44hfw\") pod \"community-operators-hqj2s\" (UID: \"e4afc7c4-f4b6-43f0-895d-d8eea95e4e44\") " pod="openshift-marketplace/community-operators-hqj2s" Jan 26 00:14:41 crc kubenswrapper[5124]: I0126 00:14:41.749985 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4afc7c4-f4b6-43f0-895d-d8eea95e4e44-utilities\") pod \"community-operators-hqj2s\" (UID: \"e4afc7c4-f4b6-43f0-895d-d8eea95e4e44\") " pod="openshift-marketplace/community-operators-hqj2s" Jan 26 00:14:41 crc kubenswrapper[5124]: I0126 00:14:41.853816 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4afc7c4-f4b6-43f0-895d-d8eea95e4e44-catalog-content\") pod \"community-operators-hqj2s\" (UID: \"e4afc7c4-f4b6-43f0-895d-d8eea95e4e44\") " pod="openshift-marketplace/community-operators-hqj2s" Jan 26 00:14:41 crc kubenswrapper[5124]: I0126 00:14:41.853897 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-44hfw\" (UniqueName: \"kubernetes.io/projected/e4afc7c4-f4b6-43f0-895d-d8eea95e4e44-kube-api-access-44hfw\") pod \"community-operators-hqj2s\" (UID: \"e4afc7c4-f4b6-43f0-895d-d8eea95e4e44\") " pod="openshift-marketplace/community-operators-hqj2s" Jan 26 00:14:41 crc kubenswrapper[5124]: I0126 00:14:41.853925 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4afc7c4-f4b6-43f0-895d-d8eea95e4e44-utilities\") pod \"community-operators-hqj2s\" (UID: 
\"e4afc7c4-f4b6-43f0-895d-d8eea95e4e44\") " pod="openshift-marketplace/community-operators-hqj2s" Jan 26 00:14:41 crc kubenswrapper[5124]: I0126 00:14:41.854606 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4afc7c4-f4b6-43f0-895d-d8eea95e4e44-utilities\") pod \"community-operators-hqj2s\" (UID: \"e4afc7c4-f4b6-43f0-895d-d8eea95e4e44\") " pod="openshift-marketplace/community-operators-hqj2s" Jan 26 00:14:41 crc kubenswrapper[5124]: I0126 00:14:41.854572 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4afc7c4-f4b6-43f0-895d-d8eea95e4e44-catalog-content\") pod \"community-operators-hqj2s\" (UID: \"e4afc7c4-f4b6-43f0-895d-d8eea95e4e44\") " pod="openshift-marketplace/community-operators-hqj2s" Jan 26 00:14:41 crc kubenswrapper[5124]: I0126 00:14:41.880453 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-44hfw\" (UniqueName: \"kubernetes.io/projected/e4afc7c4-f4b6-43f0-895d-d8eea95e4e44-kube-api-access-44hfw\") pod \"community-operators-hqj2s\" (UID: \"e4afc7c4-f4b6-43f0-895d-d8eea95e4e44\") " pod="openshift-marketplace/community-operators-hqj2s" Jan 26 00:14:41 crc kubenswrapper[5124]: I0126 00:14:41.898132 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-nq8hx"] Jan 26 00:14:41 crc kubenswrapper[5124]: I0126 00:14:41.908357 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-nq8hx" Jan 26 00:14:41 crc kubenswrapper[5124]: I0126 00:14:41.909402 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2lxw7" Jan 26 00:14:41 crc kubenswrapper[5124]: I0126 00:14:41.912913 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-nq8hx"] Jan 26 00:14:41 crc kubenswrapper[5124]: I0126 00:14:41.956287 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4b1210ae-6725-44a6-a2fb-629ace05ef49-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-nq8hx\" (UID: \"4b1210ae-6725-44a6-a2fb-629ace05ef49\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nq8hx" Jan 26 00:14:41 crc kubenswrapper[5124]: I0126 00:14:41.956352 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4b1210ae-6725-44a6-a2fb-629ace05ef49-registry-tls\") pod \"image-registry-5d9d95bf5b-nq8hx\" (UID: \"4b1210ae-6725-44a6-a2fb-629ace05ef49\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nq8hx" Jan 26 00:14:41 crc kubenswrapper[5124]: I0126 00:14:41.956375 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4b1210ae-6725-44a6-a2fb-629ace05ef49-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-nq8hx\" (UID: \"4b1210ae-6725-44a6-a2fb-629ace05ef49\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nq8hx" Jan 26 00:14:41 crc kubenswrapper[5124]: I0126 00:14:41.956413 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/4b1210ae-6725-44a6-a2fb-629ace05ef49-bound-sa-token\") pod \"image-registry-5d9d95bf5b-nq8hx\" (UID: \"4b1210ae-6725-44a6-a2fb-629ace05ef49\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nq8hx" Jan 26 00:14:41 crc kubenswrapper[5124]: I0126 00:14:41.956451 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-nq8hx\" (UID: \"4b1210ae-6725-44a6-a2fb-629ace05ef49\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nq8hx" Jan 26 00:14:41 crc kubenswrapper[5124]: I0126 00:14:41.956473 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4b1210ae-6725-44a6-a2fb-629ace05ef49-trusted-ca\") pod \"image-registry-5d9d95bf5b-nq8hx\" (UID: \"4b1210ae-6725-44a6-a2fb-629ace05ef49\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nq8hx" Jan 26 00:14:41 crc kubenswrapper[5124]: I0126 00:14:41.956494 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4b1210ae-6725-44a6-a2fb-629ace05ef49-registry-certificates\") pod \"image-registry-5d9d95bf5b-nq8hx\" (UID: \"4b1210ae-6725-44a6-a2fb-629ace05ef49\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nq8hx" Jan 26 00:14:41 crc kubenswrapper[5124]: I0126 00:14:41.956538 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnnsg\" (UniqueName: \"kubernetes.io/projected/4b1210ae-6725-44a6-a2fb-629ace05ef49-kube-api-access-hnnsg\") pod \"image-registry-5d9d95bf5b-nq8hx\" (UID: \"4b1210ae-6725-44a6-a2fb-629ace05ef49\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nq8hx" Jan 26 00:14:41 crc kubenswrapper[5124]: I0126 00:14:41.984513 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-nq8hx\" (UID: \"4b1210ae-6725-44a6-a2fb-629ace05ef49\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nq8hx" Jan 26 00:14:42 crc kubenswrapper[5124]: I0126 00:14:42.040091 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hqj2s" Jan 26 00:14:42 crc kubenswrapper[5124]: I0126 00:14:42.057344 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4b1210ae-6725-44a6-a2fb-629ace05ef49-registry-tls\") pod \"image-registry-5d9d95bf5b-nq8hx\" (UID: \"4b1210ae-6725-44a6-a2fb-629ace05ef49\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nq8hx" Jan 26 00:14:42 crc kubenswrapper[5124]: I0126 00:14:42.057388 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4b1210ae-6725-44a6-a2fb-629ace05ef49-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-nq8hx\" (UID: \"4b1210ae-6725-44a6-a2fb-629ace05ef49\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nq8hx" Jan 26 00:14:42 crc kubenswrapper[5124]: I0126 00:14:42.057424 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4b1210ae-6725-44a6-a2fb-629ace05ef49-bound-sa-token\") pod \"image-registry-5d9d95bf5b-nq8hx\" (UID: \"4b1210ae-6725-44a6-a2fb-629ace05ef49\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nq8hx" Jan 26 00:14:42 crc kubenswrapper[5124]: I0126 00:14:42.057457 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4b1210ae-6725-44a6-a2fb-629ace05ef49-trusted-ca\") pod \"image-registry-5d9d95bf5b-nq8hx\" (UID: \"4b1210ae-6725-44a6-a2fb-629ace05ef49\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nq8hx" Jan 26 00:14:42 crc kubenswrapper[5124]: I0126 00:14:42.057478 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4b1210ae-6725-44a6-a2fb-629ace05ef49-registry-certificates\") pod \"image-registry-5d9d95bf5b-nq8hx\" (UID: \"4b1210ae-6725-44a6-a2fb-629ace05ef49\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nq8hx" Jan 26 00:14:42 crc kubenswrapper[5124]: I0126 00:14:42.057517 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hnnsg\" (UniqueName: \"kubernetes.io/projected/4b1210ae-6725-44a6-a2fb-629ace05ef49-kube-api-access-hnnsg\") pod \"image-registry-5d9d95bf5b-nq8hx\" (UID: \"4b1210ae-6725-44a6-a2fb-629ace05ef49\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nq8hx" Jan 26 00:14:42 crc kubenswrapper[5124]: I0126 00:14:42.057542 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4b1210ae-6725-44a6-a2fb-629ace05ef49-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-nq8hx\" (UID: \"4b1210ae-6725-44a6-a2fb-629ace05ef49\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nq8hx" Jan 26 00:14:42 crc kubenswrapper[5124]: I0126 00:14:42.058275 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4b1210ae-6725-44a6-a2fb-629ace05ef49-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-nq8hx\" (UID: \"4b1210ae-6725-44a6-a2fb-629ace05ef49\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nq8hx" Jan 26 00:14:42 crc kubenswrapper[5124]: I0126 00:14:42.059641 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4b1210ae-6725-44a6-a2fb-629ace05ef49-registry-certificates\") pod \"image-registry-5d9d95bf5b-nq8hx\" (UID: \"4b1210ae-6725-44a6-a2fb-629ace05ef49\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nq8hx" Jan 26 00:14:42 crc kubenswrapper[5124]: I0126 00:14:42.061349 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4b1210ae-6725-44a6-a2fb-629ace05ef49-trusted-ca\") pod \"image-registry-5d9d95bf5b-nq8hx\" (UID: \"4b1210ae-6725-44a6-a2fb-629ace05ef49\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nq8hx" Jan 26 00:14:42 crc kubenswrapper[5124]: I0126 00:14:42.064140 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4b1210ae-6725-44a6-a2fb-629ace05ef49-registry-tls\") pod \"image-registry-5d9d95bf5b-nq8hx\" (UID: \"4b1210ae-6725-44a6-a2fb-629ace05ef49\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nq8hx" Jan 26 00:14:42 crc kubenswrapper[5124]: I0126 00:14:42.066021 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4b1210ae-6725-44a6-a2fb-629ace05ef49-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-nq8hx\" (UID: \"4b1210ae-6725-44a6-a2fb-629ace05ef49\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nq8hx" Jan 26 00:14:42 crc kubenswrapper[5124]: I0126 00:14:42.075215 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4b1210ae-6725-44a6-a2fb-629ace05ef49-bound-sa-token\") pod \"image-registry-5d9d95bf5b-nq8hx\" (UID: \"4b1210ae-6725-44a6-a2fb-629ace05ef49\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nq8hx" Jan 26 00:14:42 crc kubenswrapper[5124]: I0126 00:14:42.076200 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnnsg\" (UniqueName: \"kubernetes.io/projected/4b1210ae-6725-44a6-a2fb-629ace05ef49-kube-api-access-hnnsg\") pod \"image-registry-5d9d95bf5b-nq8hx\" (UID: \"4b1210ae-6725-44a6-a2fb-629ace05ef49\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nq8hx" Jan 26 00:14:42 crc kubenswrapper[5124]: I0126 00:14:42.223789 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-nq8hx" Jan 26 00:14:42 crc kubenswrapper[5124]: I0126 00:14:42.318726 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2lxw7"] Jan 26 00:14:42 crc kubenswrapper[5124]: W0126 00:14:42.327328 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3c103ac_5665_4af2_894d_ae43b0926b3f.slice/crio-e58fe5cb3a037a2839f8a8cd217de49b8299bc4b71379a5c453fdffe5a87806d WatchSource:0}: Error finding container e58fe5cb3a037a2839f8a8cd217de49b8299bc4b71379a5c453fdffe5a87806d: Status 404 returned error can't find the container with id e58fe5cb3a037a2839f8a8cd217de49b8299bc4b71379a5c453fdffe5a87806d Jan 26 00:14:42 crc kubenswrapper[5124]: I0126 00:14:42.440127 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hqj2s"] Jan 26 00:14:42 crc kubenswrapper[5124]: W0126 00:14:42.449064 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4afc7c4_f4b6_43f0_895d_d8eea95e4e44.slice/crio-e3e2e44fc5da64c277024aa717136fe0ab5d2e7229892de284146baf832411e0 WatchSource:0}: Error finding container e3e2e44fc5da64c277024aa717136fe0ab5d2e7229892de284146baf832411e0: Status 404 returned error can't find the container with id e3e2e44fc5da64c277024aa717136fe0ab5d2e7229892de284146baf832411e0 Jan 26 00:14:42 crc kubenswrapper[5124]: I0126 00:14:42.491218 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8cbbm" event={"ID":"9990d252-13eb-476d-a56a-6f40fad4a3a3","Type":"ContainerStarted","Data":"52fc3f19607f66e9171c223b89b0829a4903bfc0696f6cd63e0c3593df26bbc9"} Jan 26 00:14:42 crc kubenswrapper[5124]: I0126 00:14:42.492557 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqj2s" event={"ID":"e4afc7c4-f4b6-43f0-895d-d8eea95e4e44","Type":"ContainerStarted","Data":"e3e2e44fc5da64c277024aa717136fe0ab5d2e7229892de284146baf832411e0"} Jan 26 00:14:42 crc kubenswrapper[5124]: I0126 00:14:42.494915 5124 generic.go:358] "Generic (PLEG): container finished" podID="48b3ebb2-7731-4d34-b50d-a4ded959d5d4" containerID="43995c06143c9d6d85635c27d6eb4766fbf084346134085f2d0761fc83921134" exitCode=0 Jan 26 00:14:42 crc kubenswrapper[5124]: I0126 00:14:42.495062 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9clqw" event={"ID":"48b3ebb2-7731-4d34-b50d-a4ded959d5d4","Type":"ContainerDied","Data":"43995c06143c9d6d85635c27d6eb4766fbf084346134085f2d0761fc83921134"} Jan 26 00:14:42 crc kubenswrapper[5124]: I0126 00:14:42.497734 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2lxw7" event={"ID":"b3c103ac-5665-4af2-894d-ae43b0926b3f","Type":"ContainerStarted","Data":"3fc6c19546fe471c294c0e1aaa164fccd1d680ee4ac6d47ee568c6bb1c6b79b6"} Jan 26 00:14:42 crc kubenswrapper[5124]: I0126 00:14:42.497766 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2lxw7" event={"ID":"b3c103ac-5665-4af2-894d-ae43b0926b3f","Type":"ContainerStarted","Data":"e58fe5cb3a037a2839f8a8cd217de49b8299bc4b71379a5c453fdffe5a87806d"} Jan 26 00:14:42 crc kubenswrapper[5124]: I0126 00:14:42.509517 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-8cbbm" podStartSLOduration=2.816163249 podStartE2EDuration="3.509499103s" podCreationTimestamp="2026-01-26 00:14:39 +0000 UTC" firstStartedPulling="2026-01-26 00:14:40.472106379 +0000 UTC m=+358.381025728" lastFinishedPulling="2026-01-26 00:14:41.165442233 +0000 UTC m=+359.074361582" observedRunningTime="2026-01-26 00:14:42.50731939 +0000 UTC m=+360.416238749" watchObservedRunningTime="2026-01-26 00:14:42.509499103 +0000 UTC m=+360.418418452" Jan 26 00:14:42 crc kubenswrapper[5124]: I0126 00:14:42.629845 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-nq8hx"] Jan 26 00:14:43 crc kubenswrapper[5124]: I0126 00:14:43.503759 5124 generic.go:358] "Generic (PLEG): container finished" podID="b3c103ac-5665-4af2-894d-ae43b0926b3f" containerID="3fc6c19546fe471c294c0e1aaa164fccd1d680ee4ac6d47ee568c6bb1c6b79b6" exitCode=0 Jan 26 00:14:43 crc kubenswrapper[5124]: I0126 00:14:43.503860 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2lxw7" event={"ID":"b3c103ac-5665-4af2-894d-ae43b0926b3f","Type":"ContainerDied","Data":"3fc6c19546fe471c294c0e1aaa164fccd1d680ee4ac6d47ee568c6bb1c6b79b6"} Jan 26 00:14:43 crc kubenswrapper[5124]: I0126 00:14:43.506660 5124 generic.go:358] "Generic (PLEG): container finished" podID="e4afc7c4-f4b6-43f0-895d-d8eea95e4e44" containerID="2f5c4ac47a7b515d5ac28dd1aa9e8ff2ad33c7e1c5865e685b7271190c1c80b5" exitCode=0 Jan 26 00:14:43 crc kubenswrapper[5124]: I0126 00:14:43.506721 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqj2s" event={"ID":"e4afc7c4-f4b6-43f0-895d-d8eea95e4e44","Type":"ContainerDied","Data":"2f5c4ac47a7b515d5ac28dd1aa9e8ff2ad33c7e1c5865e685b7271190c1c80b5"} Jan 26 00:14:43 crc kubenswrapper[5124]: I0126 00:14:43.508113 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-nq8hx" event={"ID":"4b1210ae-6725-44a6-a2fb-629ace05ef49","Type":"ContainerStarted","Data":"8ba5af0fad263059d5596cc044706a8fe7fcada57104eb12e887a1e0ed280ea1"} Jan 26 00:14:43 crc kubenswrapper[5124]: I0126 00:14:43.508138 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-nq8hx" event={"ID":"4b1210ae-6725-44a6-a2fb-629ace05ef49","Type":"ContainerStarted","Data":"0a49df341879b6dd5b3892afd1d2056dc6a8113a1631ff1a70032ca1fff95893"} Jan 26 00:14:43 crc kubenswrapper[5124]: I0126 00:14:43.508339 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-nq8hx" Jan 26 00:14:43 crc kubenswrapper[5124]: I0126 00:14:43.510766 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9clqw" event={"ID":"48b3ebb2-7731-4d34-b50d-a4ded959d5d4","Type":"ContainerStarted","Data":"28c40b407891d46f8328ec2683061241de3b69cfee8267da6b0e90b51dd5b6e1"} Jan 26 00:14:43 crc kubenswrapper[5124]: I0126 00:14:43.532346 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-nq8hx" podStartSLOduration=2.532329904 podStartE2EDuration="2.532329904s" podCreationTimestamp="2026-01-26 00:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:14:43.531105049 +0000 UTC m=+361.440024398" 
watchObservedRunningTime="2026-01-26 00:14:43.532329904 +0000 UTC m=+361.441249253" Jan 26 00:14:43 crc kubenswrapper[5124]: I0126 00:14:43.550460 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9clqw" podStartSLOduration=3.785121025 podStartE2EDuration="4.54953218s" podCreationTimestamp="2026-01-26 00:14:39 +0000 UTC" firstStartedPulling="2026-01-26 00:14:40.469153174 +0000 UTC m=+358.378072523" lastFinishedPulling="2026-01-26 00:14:41.233564329 +0000 UTC m=+359.142483678" observedRunningTime="2026-01-26 00:14:43.545423692 +0000 UTC m=+361.454343051" watchObservedRunningTime="2026-01-26 00:14:43.54953218 +0000 UTC m=+361.458451529" Jan 26 00:14:45 crc kubenswrapper[5124]: I0126 00:14:45.525122 5124 generic.go:358] "Generic (PLEG): container finished" podID="e4afc7c4-f4b6-43f0-895d-d8eea95e4e44" containerID="c5705bd6e5042ef4168ece620abb7752fd6b1f989560bf05bcb0eea016fec593" exitCode=0 Jan 26 00:14:45 crc kubenswrapper[5124]: I0126 00:14:45.525245 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqj2s" event={"ID":"e4afc7c4-f4b6-43f0-895d-d8eea95e4e44","Type":"ContainerDied","Data":"c5705bd6e5042ef4168ece620abb7752fd6b1f989560bf05bcb0eea016fec593"} Jan 26 00:14:45 crc kubenswrapper[5124]: I0126 00:14:45.527950 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2lxw7" event={"ID":"b3c103ac-5665-4af2-894d-ae43b0926b3f","Type":"ContainerStarted","Data":"dca4a6111614e94b1d068474d30a8a2d0cdbaf92608c4f9d21dcd10b2dba775f"} Jan 26 00:14:46 crc kubenswrapper[5124]: I0126 00:14:46.542185 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqj2s" event={"ID":"e4afc7c4-f4b6-43f0-895d-d8eea95e4e44","Type":"ContainerStarted","Data":"999a972b16d4622172ca6d4095793faaf41313348de23e5696b33bb3203a431a"} Jan 26 00:14:46 crc kubenswrapper[5124]: I0126 00:14:46.544416 5124 generic.go:358] "Generic (PLEG): container finished" podID="b3c103ac-5665-4af2-894d-ae43b0926b3f" containerID="dca4a6111614e94b1d068474d30a8a2d0cdbaf92608c4f9d21dcd10b2dba775f" exitCode=0 Jan 26 00:14:46 crc kubenswrapper[5124]: I0126 00:14:46.544463 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2lxw7" event={"ID":"b3c103ac-5665-4af2-894d-ae43b0926b3f","Type":"ContainerDied","Data":"dca4a6111614e94b1d068474d30a8a2d0cdbaf92608c4f9d21dcd10b2dba775f"} Jan 26 00:14:46 crc kubenswrapper[5124]: I0126 00:14:46.564361 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hqj2s" podStartSLOduration=4.5673133870000004 podStartE2EDuration="5.564337684s" podCreationTimestamp="2026-01-26 00:14:41 +0000 UTC" firstStartedPulling="2026-01-26 00:14:43.507507207 +0000 UTC m=+361.416426556" lastFinishedPulling="2026-01-26 00:14:44.504531504 +0000 UTC m=+362.413450853" observedRunningTime="2026-01-26 00:14:46.562904913 +0000 UTC m=+364.471824262" watchObservedRunningTime="2026-01-26 00:14:46.564337684 +0000 UTC m=+364.473257033" Jan 26 00:14:49 crc kubenswrapper[5124]: I0126 00:14:49.445079 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-8cbbm" Jan 26 00:14:49 crc kubenswrapper[5124]: I0126 00:14:49.445864 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8cbbm" Jan 26 00:14:49 crc 
kubenswrapper[5124]: I0126 00:14:49.484945 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8cbbm" Jan 26 00:14:49 crc kubenswrapper[5124]: I0126 00:14:49.600735 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8cbbm" Jan 26 00:14:49 crc kubenswrapper[5124]: I0126 00:14:49.615366 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9clqw" Jan 26 00:14:49 crc kubenswrapper[5124]: I0126 00:14:49.618794 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-9clqw" Jan 26 00:14:49 crc kubenswrapper[5124]: I0126 00:14:49.656639 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9clqw" Jan 26 00:14:50 crc kubenswrapper[5124]: I0126 00:14:50.630039 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9clqw" Jan 26 00:14:52 crc kubenswrapper[5124]: I0126 00:14:52.041828 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-hqj2s" Jan 26 00:14:52 crc kubenswrapper[5124]: I0126 00:14:52.041894 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hqj2s" Jan 26 00:14:52 crc kubenswrapper[5124]: I0126 00:14:52.078357 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hqj2s" Jan 26 00:14:52 crc kubenswrapper[5124]: I0126 00:14:52.610132 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hqj2s" Jan 26 00:14:54 crc kubenswrapper[5124]: I0126 00:14:54.584398 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2lxw7" event={"ID":"b3c103ac-5665-4af2-894d-ae43b0926b3f","Type":"ContainerStarted","Data":"23582fe5661eb8ed032f8f785a7628fd12d10af021e62deb4994e2faee25eedd"} Jan 26 00:14:54 crc kubenswrapper[5124]: I0126 00:14:54.604932 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2lxw7" podStartSLOduration=12.213077636 podStartE2EDuration="13.604911844s" podCreationTimestamp="2026-01-26 00:14:41 +0000 UTC" firstStartedPulling="2026-01-26 00:14:43.504704126 +0000 UTC m=+361.413623475" lastFinishedPulling="2026-01-26 00:14:44.896538334 +0000 UTC m=+362.805457683" observedRunningTime="2026-01-26 00:14:54.600083174 +0000 UTC m=+372.509002543" watchObservedRunningTime="2026-01-26 00:14:54.604911844 +0000 UTC m=+372.513831193" Jan 26 00:15:00 crc kubenswrapper[5124]: I0126 00:15:00.138882 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29489775-9gjhz"] Jan 26 00:15:01 crc kubenswrapper[5124]: I0126 00:15:01.348717 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29489775-9gjhz"] Jan 26 00:15:01 crc kubenswrapper[5124]: I0126 00:15:01.348898 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-9gjhz" Jan 26 00:15:01 crc kubenswrapper[5124]: I0126 00:15:01.351125 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 26 00:15:01 crc kubenswrapper[5124]: I0126 00:15:01.351150 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 26 00:15:01 crc kubenswrapper[5124]: I0126 00:15:01.417182 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg9wl\" (UniqueName: \"kubernetes.io/projected/338d192f-3411-4ecf-ac00-babc13e98707-kube-api-access-rg9wl\") pod \"collect-profiles-29489775-9gjhz\" (UID: \"338d192f-3411-4ecf-ac00-babc13e98707\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-9gjhz" Jan 26 00:15:01 crc kubenswrapper[5124]: I0126 00:15:01.417475 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/338d192f-3411-4ecf-ac00-babc13e98707-config-volume\") pod \"collect-profiles-29489775-9gjhz\" (UID: \"338d192f-3411-4ecf-ac00-babc13e98707\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-9gjhz" Jan 26 00:15:01 crc kubenswrapper[5124]: I0126 00:15:01.417708 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/338d192f-3411-4ecf-ac00-babc13e98707-secret-volume\") pod \"collect-profiles-29489775-9gjhz\" (UID: \"338d192f-3411-4ecf-ac00-babc13e98707\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-9gjhz" Jan 26 00:15:01 crc kubenswrapper[5124]: I0126 00:15:01.518976 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rg9wl\" (UniqueName: \"kubernetes.io/projected/338d192f-3411-4ecf-ac00-babc13e98707-kube-api-access-rg9wl\") pod \"collect-profiles-29489775-9gjhz\" (UID: \"338d192f-3411-4ecf-ac00-babc13e98707\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-9gjhz" Jan 26 00:15:01 crc kubenswrapper[5124]: I0126 00:15:01.519037 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/338d192f-3411-4ecf-ac00-babc13e98707-config-volume\") pod \"collect-profiles-29489775-9gjhz\" (UID: \"338d192f-3411-4ecf-ac00-babc13e98707\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-9gjhz" Jan 26 00:15:01 crc kubenswrapper[5124]: I0126 00:15:01.519505 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/338d192f-3411-4ecf-ac00-babc13e98707-secret-volume\") pod \"collect-profiles-29489775-9gjhz\" (UID: \"338d192f-3411-4ecf-ac00-babc13e98707\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-9gjhz" Jan 26 00:15:01 crc kubenswrapper[5124]: I0126 00:15:01.520547 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/338d192f-3411-4ecf-ac00-babc13e98707-config-volume\") pod \"collect-profiles-29489775-9gjhz\" (UID: \"338d192f-3411-4ecf-ac00-babc13e98707\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-9gjhz" Jan 
26 00:15:01 crc kubenswrapper[5124]: I0126 00:15:01.525734 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/338d192f-3411-4ecf-ac00-babc13e98707-secret-volume\") pod \"collect-profiles-29489775-9gjhz\" (UID: \"338d192f-3411-4ecf-ac00-babc13e98707\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-9gjhz" Jan 26 00:15:01 crc kubenswrapper[5124]: I0126 00:15:01.537146 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg9wl\" (UniqueName: \"kubernetes.io/projected/338d192f-3411-4ecf-ac00-babc13e98707-kube-api-access-rg9wl\") pod \"collect-profiles-29489775-9gjhz\" (UID: \"338d192f-3411-4ecf-ac00-babc13e98707\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-9gjhz" Jan 26 00:15:01 crc kubenswrapper[5124]: I0126 00:15:01.664459 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-9gjhz" Jan 26 00:15:01 crc kubenswrapper[5124]: I0126 00:15:01.911135 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2lxw7" Jan 26 00:15:01 crc kubenswrapper[5124]: I0126 00:15:01.911496 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-2lxw7" Jan 26 00:15:01 crc kubenswrapper[5124]: I0126 00:15:01.945833 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2lxw7" Jan 26 00:15:02 crc kubenswrapper[5124]: I0126 00:15:02.056769 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29489775-9gjhz"] Jan 26 00:15:02 crc kubenswrapper[5124]: W0126 00:15:02.071950 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod338d192f_3411_4ecf_ac00_babc13e98707.slice/crio-a00030ac6fc24f2e859196d00c015fadb63205b0292d49f25f95cb1f50f5c285 WatchSource:0}: Error finding container a00030ac6fc24f2e859196d00c015fadb63205b0292d49f25f95cb1f50f5c285: Status 404 returned error can't find the container with id a00030ac6fc24f2e859196d00c015fadb63205b0292d49f25f95cb1f50f5c285 Jan 26 00:15:02 crc kubenswrapper[5124]: I0126 00:15:02.625309 5124 generic.go:358] "Generic (PLEG): container finished" podID="338d192f-3411-4ecf-ac00-babc13e98707" containerID="e01d31f76255ff733ff2a0907fbe4b6d4e838adf0cf00bb68fe0664a243d7c6d" exitCode=0 Jan 26 00:15:02 crc kubenswrapper[5124]: I0126 00:15:02.625407 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-9gjhz" event={"ID":"338d192f-3411-4ecf-ac00-babc13e98707","Type":"ContainerDied","Data":"e01d31f76255ff733ff2a0907fbe4b6d4e838adf0cf00bb68fe0664a243d7c6d"} Jan 26 00:15:02 crc kubenswrapper[5124]: I0126 00:15:02.625764 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-9gjhz" event={"ID":"338d192f-3411-4ecf-ac00-babc13e98707","Type":"ContainerStarted","Data":"a00030ac6fc24f2e859196d00c015fadb63205b0292d49f25f95cb1f50f5c285"} Jan 26 00:15:02 crc kubenswrapper[5124]: I0126 00:15:02.664980 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2lxw7" Jan 26 00:15:03 crc kubenswrapper[5124]: I0126 00:15:03.927974 5124 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-9gjhz" Jan 26 00:15:04 crc kubenswrapper[5124]: I0126 00:15:04.058574 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rg9wl\" (UniqueName: \"kubernetes.io/projected/338d192f-3411-4ecf-ac00-babc13e98707-kube-api-access-rg9wl\") pod \"338d192f-3411-4ecf-ac00-babc13e98707\" (UID: \"338d192f-3411-4ecf-ac00-babc13e98707\") " Jan 26 00:15:04 crc kubenswrapper[5124]: I0126 00:15:04.058862 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/338d192f-3411-4ecf-ac00-babc13e98707-secret-volume\") pod \"338d192f-3411-4ecf-ac00-babc13e98707\" (UID: \"338d192f-3411-4ecf-ac00-babc13e98707\") " Jan 26 00:15:04 crc kubenswrapper[5124]: I0126 00:15:04.058973 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/338d192f-3411-4ecf-ac00-babc13e98707-config-volume\") pod \"338d192f-3411-4ecf-ac00-babc13e98707\" (UID: \"338d192f-3411-4ecf-ac00-babc13e98707\") " Jan 26 00:15:04 crc kubenswrapper[5124]: I0126 00:15:04.060867 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/338d192f-3411-4ecf-ac00-babc13e98707-config-volume" (OuterVolumeSpecName: "config-volume") pod "338d192f-3411-4ecf-ac00-babc13e98707" (UID: "338d192f-3411-4ecf-ac00-babc13e98707"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:15:04 crc kubenswrapper[5124]: I0126 00:15:04.065319 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/338d192f-3411-4ecf-ac00-babc13e98707-kube-api-access-rg9wl" (OuterVolumeSpecName: "kube-api-access-rg9wl") pod "338d192f-3411-4ecf-ac00-babc13e98707" (UID: "338d192f-3411-4ecf-ac00-babc13e98707"). InnerVolumeSpecName "kube-api-access-rg9wl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:15:04 crc kubenswrapper[5124]: I0126 00:15:04.065706 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/338d192f-3411-4ecf-ac00-babc13e98707-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "338d192f-3411-4ecf-ac00-babc13e98707" (UID: "338d192f-3411-4ecf-ac00-babc13e98707"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:15:04 crc kubenswrapper[5124]: I0126 00:15:04.160947 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rg9wl\" (UniqueName: \"kubernetes.io/projected/338d192f-3411-4ecf-ac00-babc13e98707-kube-api-access-rg9wl\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:04 crc kubenswrapper[5124]: I0126 00:15:04.160983 5124 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/338d192f-3411-4ecf-ac00-babc13e98707-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:04 crc kubenswrapper[5124]: I0126 00:15:04.160993 5124 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/338d192f-3411-4ecf-ac00-babc13e98707-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:04 crc kubenswrapper[5124]: I0126 00:15:04.640223 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-9gjhz" Jan 26 00:15:04 crc kubenswrapper[5124]: I0126 00:15:04.640240 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-9gjhz" event={"ID":"338d192f-3411-4ecf-ac00-babc13e98707","Type":"ContainerDied","Data":"a00030ac6fc24f2e859196d00c015fadb63205b0292d49f25f95cb1f50f5c285"} Jan 26 00:15:04 crc kubenswrapper[5124]: I0126 00:15:04.640286 5124 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a00030ac6fc24f2e859196d00c015fadb63205b0292d49f25f95cb1f50f5c285" Jan 26 00:15:04 crc kubenswrapper[5124]: I0126 00:15:04.682471 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-nq8hx" Jan 26 00:15:04 crc kubenswrapper[5124]: I0126 00:15:04.731215 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-25hx6"] Jan 26 00:15:29 crc kubenswrapper[5124]: I0126 00:15:29.773796 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-25hx6" podUID="5ce48d95-5f74-4d15-8f19-94cfd81c3dcf" containerName="registry" containerID="cri-o://b4e96e87e1ab2b525c3619e0976d63bce8f15242efc1d83e505b7c980a6ba79b" gracePeriod=30 Jan 26 00:15:30 crc kubenswrapper[5124]: I0126 00:15:30.158473 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:15:30 crc kubenswrapper[5124]: I0126 00:15:30.214437 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5ce48d95-5f74-4d15-8f19-94cfd81c3dcf-trusted-ca\") pod \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " Jan 26 00:15:30 crc kubenswrapper[5124]: I0126 00:15:30.214562 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5ce48d95-5f74-4d15-8f19-94cfd81c3dcf-installation-pull-secrets\") pod \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " Jan 26 00:15:30 crc kubenswrapper[5124]: I0126 00:15:30.214685 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5ce48d95-5f74-4d15-8f19-94cfd81c3dcf-registry-certificates\") pod \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " Jan 26 00:15:30 crc kubenswrapper[5124]: I0126 00:15:30.214880 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " Jan 26 00:15:30 crc kubenswrapper[5124]: I0126 00:15:30.214986 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkh5d\" (UniqueName: \"kubernetes.io/projected/5ce48d95-5f74-4d15-8f19-94cfd81c3dcf-kube-api-access-wkh5d\") pod \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " Jan 26 00:15:30 crc kubenswrapper[5124]: I0126 00:15:30.215010 5124 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5ce48d95-5f74-4d15-8f19-94cfd81c3dcf-bound-sa-token\") pod \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " Jan 26 00:15:30 crc kubenswrapper[5124]: I0126 00:15:30.215035 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5ce48d95-5f74-4d15-8f19-94cfd81c3dcf-registry-tls\") pod \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " Jan 26 00:15:30 crc kubenswrapper[5124]: I0126 00:15:30.215104 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5ce48d95-5f74-4d15-8f19-94cfd81c3dcf-ca-trust-extracted\") pod \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\" (UID: \"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf\") " Jan 26 00:15:30 crc kubenswrapper[5124]: I0126 00:15:30.215404 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ce48d95-5f74-4d15-8f19-94cfd81c3dcf-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:15:30 crc kubenswrapper[5124]: I0126 00:15:30.216485 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ce48d95-5f74-4d15-8f19-94cfd81c3dcf-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:15:30 crc kubenswrapper[5124]: I0126 00:15:30.221002 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ce48d95-5f74-4d15-8f19-94cfd81c3dcf-kube-api-access-wkh5d" (OuterVolumeSpecName: "kube-api-access-wkh5d") pod "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf"). InnerVolumeSpecName "kube-api-access-wkh5d". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:15:30 crc kubenswrapper[5124]: I0126 00:15:30.221544 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ce48d95-5f74-4d15-8f19-94cfd81c3dcf-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:15:30 crc kubenswrapper[5124]: I0126 00:15:30.221771 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ce48d95-5f74-4d15-8f19-94cfd81c3dcf-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:15:30 crc kubenswrapper[5124]: I0126 00:15:30.221990 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ce48d95-5f74-4d15-8f19-94cfd81c3dcf-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:15:30 crc kubenswrapper[5124]: I0126 00:15:30.225796 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Jan 26 00:15:30 crc kubenswrapper[5124]: I0126 00:15:30.234553 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ce48d95-5f74-4d15-8f19-94cfd81c3dcf-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf" (UID: "5ce48d95-5f74-4d15-8f19-94cfd81c3dcf"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:15:30 crc kubenswrapper[5124]: I0126 00:15:30.316562 5124 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5ce48d95-5f74-4d15-8f19-94cfd81c3dcf-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:30 crc kubenswrapper[5124]: I0126 00:15:30.316640 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wkh5d\" (UniqueName: \"kubernetes.io/projected/5ce48d95-5f74-4d15-8f19-94cfd81c3dcf-kube-api-access-wkh5d\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:30 crc kubenswrapper[5124]: I0126 00:15:30.316650 5124 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5ce48d95-5f74-4d15-8f19-94cfd81c3dcf-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:30 crc kubenswrapper[5124]: I0126 00:15:30.316662 5124 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5ce48d95-5f74-4d15-8f19-94cfd81c3dcf-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:30 crc kubenswrapper[5124]: I0126 00:15:30.316672 5124 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5ce48d95-5f74-4d15-8f19-94cfd81c3dcf-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:30 crc kubenswrapper[5124]: I0126 00:15:30.316680 5124 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5ce48d95-5f74-4d15-8f19-94cfd81c3dcf-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:30 crc kubenswrapper[5124]: I0126 00:15:30.316689 5124 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5ce48d95-5f74-4d15-8f19-94cfd81c3dcf-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:30 crc kubenswrapper[5124]: I0126 00:15:30.788230 5124 generic.go:358] "Generic (PLEG): container finished" podID="5ce48d95-5f74-4d15-8f19-94cfd81c3dcf" 
containerID="b4e96e87e1ab2b525c3619e0976d63bce8f15242efc1d83e505b7c980a6ba79b" exitCode=0 Jan 26 00:15:30 crc kubenswrapper[5124]: I0126 00:15:30.788286 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-25hx6" event={"ID":"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf","Type":"ContainerDied","Data":"b4e96e87e1ab2b525c3619e0976d63bce8f15242efc1d83e505b7c980a6ba79b"} Jan 26 00:15:30 crc kubenswrapper[5124]: I0126 00:15:30.788352 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-25hx6" event={"ID":"5ce48d95-5f74-4d15-8f19-94cfd81c3dcf","Type":"ContainerDied","Data":"b81aa5dd44d02238fab14ff86f412f84e62e671e33e2dac2d82ac0d9819fbb72"} Jan 26 00:15:30 crc kubenswrapper[5124]: I0126 00:15:30.788373 5124 scope.go:117] "RemoveContainer" containerID="b4e96e87e1ab2b525c3619e0976d63bce8f15242efc1d83e505b7c980a6ba79b" Jan 26 00:15:30 crc kubenswrapper[5124]: I0126 00:15:30.788534 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-25hx6" Jan 26 00:15:30 crc kubenswrapper[5124]: I0126 00:15:30.812972 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-25hx6"] Jan 26 00:15:30 crc kubenswrapper[5124]: I0126 00:15:30.817508 5124 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-25hx6"] Jan 26 00:15:30 crc kubenswrapper[5124]: I0126 00:15:30.819319 5124 scope.go:117] "RemoveContainer" containerID="b4e96e87e1ab2b525c3619e0976d63bce8f15242efc1d83e505b7c980a6ba79b" Jan 26 00:15:30 crc kubenswrapper[5124]: E0126 00:15:30.819823 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4e96e87e1ab2b525c3619e0976d63bce8f15242efc1d83e505b7c980a6ba79b\": container with ID starting with b4e96e87e1ab2b525c3619e0976d63bce8f15242efc1d83e505b7c980a6ba79b not found: ID does not exist" containerID="b4e96e87e1ab2b525c3619e0976d63bce8f15242efc1d83e505b7c980a6ba79b" Jan 26 00:15:30 crc kubenswrapper[5124]: I0126 00:15:30.819888 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4e96e87e1ab2b525c3619e0976d63bce8f15242efc1d83e505b7c980a6ba79b"} err="failed to get container status \"b4e96e87e1ab2b525c3619e0976d63bce8f15242efc1d83e505b7c980a6ba79b\": rpc error: code = NotFound desc = could not find container \"b4e96e87e1ab2b525c3619e0976d63bce8f15242efc1d83e505b7c980a6ba79b\": container with ID starting with b4e96e87e1ab2b525c3619e0976d63bce8f15242efc1d83e505b7c980a6ba79b not found: ID does not exist" Jan 26 00:15:32 crc kubenswrapper[5124]: I0126 00:15:32.378897 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ce48d95-5f74-4d15-8f19-94cfd81c3dcf" path="/var/lib/kubelet/pods/5ce48d95-5f74-4d15-8f19-94cfd81c3dcf/volumes" Jan 26 00:16:00 crc kubenswrapper[5124]: I0126 00:16:00.136476 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29489776-zlkf6"] Jan 26 00:16:00 crc kubenswrapper[5124]: I0126 00:16:00.143256 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="338d192f-3411-4ecf-ac00-babc13e98707" containerName="collect-profiles" Jan 26 00:16:00 crc kubenswrapper[5124]: I0126 00:16:00.143277 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="338d192f-3411-4ecf-ac00-babc13e98707" 
containerName="collect-profiles" Jan 26 00:16:00 crc kubenswrapper[5124]: I0126 00:16:00.143295 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5ce48d95-5f74-4d15-8f19-94cfd81c3dcf" containerName="registry" Jan 26 00:16:00 crc kubenswrapper[5124]: I0126 00:16:00.143301 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ce48d95-5f74-4d15-8f19-94cfd81c3dcf" containerName="registry" Jan 26 00:16:00 crc kubenswrapper[5124]: I0126 00:16:00.143430 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="5ce48d95-5f74-4d15-8f19-94cfd81c3dcf" containerName="registry" Jan 26 00:16:00 crc kubenswrapper[5124]: I0126 00:16:00.143450 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="338d192f-3411-4ecf-ac00-babc13e98707" containerName="collect-profiles" Jan 26 00:16:00 crc kubenswrapper[5124]: I0126 00:16:00.155212 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489776-zlkf6" Jan 26 00:16:00 crc kubenswrapper[5124]: I0126 00:16:00.155274 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489776-zlkf6"] Jan 26 00:16:00 crc kubenswrapper[5124]: I0126 00:16:00.158981 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 26 00:16:00 crc kubenswrapper[5124]: I0126 00:16:00.159050 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 26 00:16:00 crc kubenswrapper[5124]: I0126 00:16:00.159634 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-26tfw\"" Jan 26 00:16:00 crc kubenswrapper[5124]: I0126 00:16:00.239724 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l2ml\" (UniqueName: \"kubernetes.io/projected/3fe2d2b1-e495-4127-bda5-97d67b08dc73-kube-api-access-7l2ml\") pod \"auto-csr-approver-29489776-zlkf6\" (UID: \"3fe2d2b1-e495-4127-bda5-97d67b08dc73\") " pod="openshift-infra/auto-csr-approver-29489776-zlkf6" Jan 26 00:16:00 crc kubenswrapper[5124]: I0126 00:16:00.341821 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7l2ml\" (UniqueName: \"kubernetes.io/projected/3fe2d2b1-e495-4127-bda5-97d67b08dc73-kube-api-access-7l2ml\") pod \"auto-csr-approver-29489776-zlkf6\" (UID: \"3fe2d2b1-e495-4127-bda5-97d67b08dc73\") " pod="openshift-infra/auto-csr-approver-29489776-zlkf6" Jan 26 00:16:00 crc kubenswrapper[5124]: I0126 00:16:00.364316 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l2ml\" (UniqueName: \"kubernetes.io/projected/3fe2d2b1-e495-4127-bda5-97d67b08dc73-kube-api-access-7l2ml\") pod \"auto-csr-approver-29489776-zlkf6\" (UID: \"3fe2d2b1-e495-4127-bda5-97d67b08dc73\") " pod="openshift-infra/auto-csr-approver-29489776-zlkf6" Jan 26 00:16:00 crc kubenswrapper[5124]: I0126 00:16:00.484984 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489776-zlkf6" Jan 26 00:16:01 crc kubenswrapper[5124]: I0126 00:16:01.006515 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489776-zlkf6"] Jan 26 00:16:01 crc kubenswrapper[5124]: I0126 00:16:01.972648 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489776-zlkf6" event={"ID":"3fe2d2b1-e495-4127-bda5-97d67b08dc73","Type":"ContainerStarted","Data":"886febe172c2a93ef7f47e26533e633880feacd98c4a12bb10179ac6fc6a43e1"} Jan 26 00:16:03 crc kubenswrapper[5124]: I0126 00:16:03.987306 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489776-zlkf6" event={"ID":"3fe2d2b1-e495-4127-bda5-97d67b08dc73","Type":"ContainerStarted","Data":"438835d18323e2f1e3678c7785469844146c6987d7930abea00ea95eac4ca4d9"} Jan 26 00:16:04 crc kubenswrapper[5124]: I0126 00:16:04.002386 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29489776-zlkf6" podStartSLOduration=1.41192138 podStartE2EDuration="4.002371883s" podCreationTimestamp="2026-01-26 00:16:00 +0000 UTC" firstStartedPulling="2026-01-26 00:16:01.017479077 +0000 UTC m=+438.926398426" lastFinishedPulling="2026-01-26 00:16:03.60792959 +0000 UTC m=+441.516848929" observedRunningTime="2026-01-26 00:16:03.998449557 +0000 UTC m=+441.907368906" watchObservedRunningTime="2026-01-26 00:16:04.002371883 +0000 UTC m=+441.911291222" Jan 26 00:16:04 crc kubenswrapper[5124]: I0126 00:16:04.173729 5124 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-sc567" Jan 26 00:16:04 crc kubenswrapper[5124]: I0126 00:16:04.198217 5124 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-sc567" Jan 26 00:16:04 crc kubenswrapper[5124]: I0126 00:16:04.995488 5124 generic.go:358] "Generic (PLEG): container finished" podID="3fe2d2b1-e495-4127-bda5-97d67b08dc73" containerID="438835d18323e2f1e3678c7785469844146c6987d7930abea00ea95eac4ca4d9" exitCode=0 Jan 26 00:16:04 crc kubenswrapper[5124]: I0126 00:16:04.995655 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489776-zlkf6" event={"ID":"3fe2d2b1-e495-4127-bda5-97d67b08dc73","Type":"ContainerDied","Data":"438835d18323e2f1e3678c7785469844146c6987d7930abea00ea95eac4ca4d9"} Jan 26 00:16:05 crc kubenswrapper[5124]: I0126 00:16:05.199238 5124 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-02-25 00:11:04 +0000 UTC" deadline="2026-02-20 18:27:11.574420691 +0000 UTC" Jan 26 00:16:05 crc kubenswrapper[5124]: I0126 00:16:05.199276 5124 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="618h11m6.37514764s" Jan 26 00:16:06 crc kubenswrapper[5124]: I0126 00:16:06.200145 5124 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-02-25 00:11:04 +0000 UTC" deadline="2026-02-20 12:14:12.8920654 +0000 UTC" Jan 26 00:16:06 crc kubenswrapper[5124]: I0126 00:16:06.200332 5124 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="611h58m6.691737593s" Jan 26 00:16:06 crc kubenswrapper[5124]: I0126 00:16:06.227152 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489776-zlkf6" Jan 26 00:16:06 crc kubenswrapper[5124]: I0126 00:16:06.235009 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7l2ml\" (UniqueName: \"kubernetes.io/projected/3fe2d2b1-e495-4127-bda5-97d67b08dc73-kube-api-access-7l2ml\") pod \"3fe2d2b1-e495-4127-bda5-97d67b08dc73\" (UID: \"3fe2d2b1-e495-4127-bda5-97d67b08dc73\") " Jan 26 00:16:06 crc kubenswrapper[5124]: I0126 00:16:06.244467 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fe2d2b1-e495-4127-bda5-97d67b08dc73-kube-api-access-7l2ml" (OuterVolumeSpecName: "kube-api-access-7l2ml") pod "3fe2d2b1-e495-4127-bda5-97d67b08dc73" (UID: "3fe2d2b1-e495-4127-bda5-97d67b08dc73"). InnerVolumeSpecName "kube-api-access-7l2ml". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:16:06 crc kubenswrapper[5124]: I0126 00:16:06.336891 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7l2ml\" (UniqueName: \"kubernetes.io/projected/3fe2d2b1-e495-4127-bda5-97d67b08dc73-kube-api-access-7l2ml\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:07 crc kubenswrapper[5124]: I0126 00:16:07.007484 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489776-zlkf6" Jan 26 00:16:07 crc kubenswrapper[5124]: I0126 00:16:07.007568 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489776-zlkf6" event={"ID":"3fe2d2b1-e495-4127-bda5-97d67b08dc73","Type":"ContainerDied","Data":"886febe172c2a93ef7f47e26533e633880feacd98c4a12bb10179ac6fc6a43e1"} Jan 26 00:16:07 crc kubenswrapper[5124]: I0126 00:16:07.008070 5124 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="886febe172c2a93ef7f47e26533e633880feacd98c4a12bb10179ac6fc6a43e1" Jan 26 00:16:10 crc kubenswrapper[5124]: I0126 00:16:10.830051 5124 patch_prober.go:28] interesting pod/machine-config-daemon-kmxcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:16:10 crc kubenswrapper[5124]: I0126 00:16:10.830626 5124 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" podUID="95fa0656-150a-4d93-a324-77a1306d91f7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:16:40 crc kubenswrapper[5124]: I0126 00:16:40.830143 5124 patch_prober.go:28] interesting pod/machine-config-daemon-kmxcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:16:40 crc kubenswrapper[5124]: I0126 00:16:40.831139 5124 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" podUID="95fa0656-150a-4d93-a324-77a1306d91f7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:17:10 crc kubenswrapper[5124]: I0126 00:17:10.830398 5124 patch_prober.go:28] interesting 
pod/machine-config-daemon-kmxcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:17:10 crc kubenswrapper[5124]: I0126 00:17:10.831088 5124 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" podUID="95fa0656-150a-4d93-a324-77a1306d91f7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:17:10 crc kubenswrapper[5124]: I0126 00:17:10.831144 5124 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" Jan 26 00:17:10 crc kubenswrapper[5124]: I0126 00:17:10.831918 5124 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6d673794be664ea88f97aff7d6202b405eb46b2e426b73ffc27f0c5fba62377f"} pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 00:17:10 crc kubenswrapper[5124]: I0126 00:17:10.832016 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" podUID="95fa0656-150a-4d93-a324-77a1306d91f7" containerName="machine-config-daemon" containerID="cri-o://6d673794be664ea88f97aff7d6202b405eb46b2e426b73ffc27f0c5fba62377f" gracePeriod=600 Jan 26 00:17:11 crc kubenswrapper[5124]: I0126 00:17:11.369614 5124 generic.go:358] "Generic (PLEG): container finished" podID="95fa0656-150a-4d93-a324-77a1306d91f7" containerID="6d673794be664ea88f97aff7d6202b405eb46b2e426b73ffc27f0c5fba62377f" exitCode=0 Jan 26 00:17:11 crc kubenswrapper[5124]: I0126 00:17:11.369704 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" event={"ID":"95fa0656-150a-4d93-a324-77a1306d91f7","Type":"ContainerDied","Data":"6d673794be664ea88f97aff7d6202b405eb46b2e426b73ffc27f0c5fba62377f"} Jan 26 00:17:11 crc kubenswrapper[5124]: I0126 00:17:11.370026 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" event={"ID":"95fa0656-150a-4d93-a324-77a1306d91f7","Type":"ContainerStarted","Data":"bf0d2bc539a7272b2b55b13ae5225aa87fa06ada3cce31edaeaa612f3511ce10"} Jan 26 00:17:11 crc kubenswrapper[5124]: I0126 00:17:11.370046 5124 scope.go:117] "RemoveContainer" containerID="d83d6e9dbee8896d25299332774ac25503be88561fd1040886735c806d9b1d94" Jan 26 00:18:00 crc kubenswrapper[5124]: I0126 00:18:00.144927 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29489778-69swk"] Jan 26 00:18:00 crc kubenswrapper[5124]: I0126 00:18:00.146043 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3fe2d2b1-e495-4127-bda5-97d67b08dc73" containerName="oc" Jan 26 00:18:00 crc kubenswrapper[5124]: I0126 00:18:00.146058 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fe2d2b1-e495-4127-bda5-97d67b08dc73" containerName="oc" Jan 26 00:18:00 crc kubenswrapper[5124]: I0126 00:18:00.146142 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="3fe2d2b1-e495-4127-bda5-97d67b08dc73" containerName="oc" Jan 26 00:18:00 crc 
kubenswrapper[5124]: I0126 00:18:00.149656 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489778-69swk" Jan 26 00:18:00 crc kubenswrapper[5124]: I0126 00:18:00.149683 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489778-69swk"] Jan 26 00:18:00 crc kubenswrapper[5124]: I0126 00:18:00.151417 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-26tfw\"" Jan 26 00:18:00 crc kubenswrapper[5124]: I0126 00:18:00.152918 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 26 00:18:00 crc kubenswrapper[5124]: I0126 00:18:00.174352 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 26 00:18:00 crc kubenswrapper[5124]: I0126 00:18:00.197315 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s6mg\" (UniqueName: \"kubernetes.io/projected/3ce4f34a-592b-4959-a248-ce0c338ddeea-kube-api-access-7s6mg\") pod \"auto-csr-approver-29489778-69swk\" (UID: \"3ce4f34a-592b-4959-a248-ce0c338ddeea\") " pod="openshift-infra/auto-csr-approver-29489778-69swk" Jan 26 00:18:00 crc kubenswrapper[5124]: I0126 00:18:00.298291 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7s6mg\" (UniqueName: \"kubernetes.io/projected/3ce4f34a-592b-4959-a248-ce0c338ddeea-kube-api-access-7s6mg\") pod \"auto-csr-approver-29489778-69swk\" (UID: \"3ce4f34a-592b-4959-a248-ce0c338ddeea\") " pod="openshift-infra/auto-csr-approver-29489778-69swk" Jan 26 00:18:00 crc kubenswrapper[5124]: I0126 00:18:00.315339 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s6mg\" (UniqueName: \"kubernetes.io/projected/3ce4f34a-592b-4959-a248-ce0c338ddeea-kube-api-access-7s6mg\") pod \"auto-csr-approver-29489778-69swk\" (UID: \"3ce4f34a-592b-4959-a248-ce0c338ddeea\") " pod="openshift-infra/auto-csr-approver-29489778-69swk" Jan 26 00:18:00 crc kubenswrapper[5124]: I0126 00:18:00.492779 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489778-69swk" Jan 26 00:18:00 crc kubenswrapper[5124]: I0126 00:18:00.675523 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489778-69swk"] Jan 26 00:18:01 crc kubenswrapper[5124]: I0126 00:18:01.131949 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489778-69swk" event={"ID":"3ce4f34a-592b-4959-a248-ce0c338ddeea","Type":"ContainerStarted","Data":"ce1d1083523d0e3a453a33156ac830f280660c92aa390df89409e931db15173e"} Jan 26 00:18:02 crc kubenswrapper[5124]: I0126 00:18:02.137503 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489778-69swk" event={"ID":"3ce4f34a-592b-4959-a248-ce0c338ddeea","Type":"ContainerStarted","Data":"a18a4115f1d6f85f746ece3d78249c6901eaec4a0eadf93b91e59234138ac17a"} Jan 26 00:18:02 crc kubenswrapper[5124]: I0126 00:18:02.152004 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29489778-69swk" podStartSLOduration=1.131103477 podStartE2EDuration="2.151990186s" podCreationTimestamp="2026-01-26 00:18:00 +0000 UTC" firstStartedPulling="2026-01-26 00:18:00.685992806 +0000 UTC m=+558.594912155" lastFinishedPulling="2026-01-26 00:18:01.706879515 +0000 UTC m=+559.615798864" observedRunningTime="2026-01-26 00:18:02.15033234 +0000 UTC m=+560.059251689" watchObservedRunningTime="2026-01-26 00:18:02.151990186 +0000 UTC m=+560.060909535" Jan 26 00:18:03 crc kubenswrapper[5124]: I0126 00:18:03.143230 5124 generic.go:358] "Generic (PLEG): container finished" podID="3ce4f34a-592b-4959-a248-ce0c338ddeea" containerID="a18a4115f1d6f85f746ece3d78249c6901eaec4a0eadf93b91e59234138ac17a" exitCode=0 Jan 26 00:18:03 crc kubenswrapper[5124]: I0126 00:18:03.143286 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489778-69swk" event={"ID":"3ce4f34a-592b-4959-a248-ce0c338ddeea","Type":"ContainerDied","Data":"a18a4115f1d6f85f746ece3d78249c6901eaec4a0eadf93b91e59234138ac17a"} Jan 26 00:18:04 crc kubenswrapper[5124]: I0126 00:18:04.357174 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489778-69swk" Jan 26 00:18:04 crc kubenswrapper[5124]: I0126 00:18:04.454838 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7s6mg\" (UniqueName: \"kubernetes.io/projected/3ce4f34a-592b-4959-a248-ce0c338ddeea-kube-api-access-7s6mg\") pod \"3ce4f34a-592b-4959-a248-ce0c338ddeea\" (UID: \"3ce4f34a-592b-4959-a248-ce0c338ddeea\") " Jan 26 00:18:04 crc kubenswrapper[5124]: I0126 00:18:04.461628 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ce4f34a-592b-4959-a248-ce0c338ddeea-kube-api-access-7s6mg" (OuterVolumeSpecName: "kube-api-access-7s6mg") pod "3ce4f34a-592b-4959-a248-ce0c338ddeea" (UID: "3ce4f34a-592b-4959-a248-ce0c338ddeea"). InnerVolumeSpecName "kube-api-access-7s6mg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:18:04 crc kubenswrapper[5124]: I0126 00:18:04.556427 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7s6mg\" (UniqueName: \"kubernetes.io/projected/3ce4f34a-592b-4959-a248-ce0c338ddeea-kube-api-access-7s6mg\") on node \"crc\" DevicePath \"\"" Jan 26 00:18:05 crc kubenswrapper[5124]: I0126 00:18:05.155090 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489778-69swk" Jan 26 00:18:05 crc kubenswrapper[5124]: I0126 00:18:05.155138 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489778-69swk" event={"ID":"3ce4f34a-592b-4959-a248-ce0c338ddeea","Type":"ContainerDied","Data":"ce1d1083523d0e3a453a33156ac830f280660c92aa390df89409e931db15173e"} Jan 26 00:18:05 crc kubenswrapper[5124]: I0126 00:18:05.155171 5124 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce1d1083523d0e3a453a33156ac830f280660c92aa390df89409e931db15173e" Jan 26 00:18:42 crc kubenswrapper[5124]: I0126 00:18:42.614998 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 26 00:18:42 crc kubenswrapper[5124]: I0126 00:18:42.616083 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 26 00:19:40 crc kubenswrapper[5124]: I0126 00:19:40.830146 5124 patch_prober.go:28] interesting pod/machine-config-daemon-kmxcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:19:40 crc kubenswrapper[5124]: I0126 00:19:40.830720 5124 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" podUID="95fa0656-150a-4d93-a324-77a1306d91f7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:19:49 crc kubenswrapper[5124]: I0126 00:19:49.615646 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-mpdlk"] Jan 26 00:19:49 crc kubenswrapper[5124]: I0126 00:19:49.616623 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-mpdlk" podUID="8660dad9-43c8-4c00-872a-e00a6baab0f7" containerName="kube-rbac-proxy" containerID="cri-o://30d8c5e11238102663950d2ebd33f9bc42936ba7b859ad8cbb88cd6f37520d8b" gracePeriod=30 Jan 26 00:19:49 crc kubenswrapper[5124]: I0126 00:19:49.616997 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-mpdlk" podUID="8660dad9-43c8-4c00-872a-e00a6baab0f7" containerName="ovnkube-cluster-manager" containerID="cri-o://b6f2454f5333ab911eebfd64bf0a3fabf18ab1bbf4c865a0ff147603146a0da7" gracePeriod=30 Jan 26 00:19:49 crc kubenswrapper[5124]: I0126 00:19:49.851142 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-sdh5t"] Jan 26 00:19:49 crc kubenswrapper[5124]: I0126 00:19:49.851668 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" podUID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" containerName="ovn-controller" containerID="cri-o://0846e7a97039a12d11ae54a129bb1cc8d22304487515a089342a29f2e46c54cd" gracePeriod=30 Jan 26 00:19:49 crc kubenswrapper[5124]: I0126 00:19:49.851726 5124 kuberuntime_container.go:858] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" podUID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" containerName="nbdb" containerID="cri-o://b5f0b719809cb0822f685a25e50f161bb2ebc5cf1c23741f70dd758ddb876b3b" gracePeriod=30 Jan 26 00:19:49 crc kubenswrapper[5124]: I0126 00:19:49.851754 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" podUID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://5d0e243efad4a7977ff479d31ca346032d0b27e840a55f1e5a5d7bb273240f85" gracePeriod=30 Jan 26 00:19:49 crc kubenswrapper[5124]: I0126 00:19:49.851838 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" podUID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" containerName="northd" containerID="cri-o://7bc933e302fe556f3f2333aabcc9a3d08a97facf76c1e513bf999a1d988e23fb" gracePeriod=30 Jan 26 00:19:49 crc kubenswrapper[5124]: I0126 00:19:49.851845 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" podUID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" containerName="sbdb" containerID="cri-o://ca1b69bade3b1295f64aadb4876cc913493c85d40031cbb54db17ed26dd59b35" gracePeriod=30 Jan 26 00:19:49 crc kubenswrapper[5124]: I0126 00:19:49.851839 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" podUID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" containerName="kube-rbac-proxy-node" containerID="cri-o://0d77c5aa52d9865a0987d699842af27c3031d5bc2f5c315f2214c950b8209d98" gracePeriod=30 Jan 26 00:19:49 crc kubenswrapper[5124]: I0126 00:19:49.851940 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" podUID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" containerName="ovn-acl-logging" containerID="cri-o://a739bd25adba363ac8e62a851d5bbc4e0970ab2b4d947f6b0abcb988e9b8ae33" gracePeriod=30 Jan 26 00:19:49 crc kubenswrapper[5124]: I0126 00:19:49.877836 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" podUID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" containerName="ovnkube-controller" containerID="cri-o://b88d112d1a5a62e96208e9742a4f115e993356d2fe5cbb8114638a70a7504768" gracePeriod=30 Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.312544 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-mpdlk" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.337197 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8660dad9-43c8-4c00-872a-e00a6baab0f7-ovn-control-plane-metrics-cert\") pod \"8660dad9-43c8-4c00-872a-e00a6baab0f7\" (UID: \"8660dad9-43c8-4c00-872a-e00a6baab0f7\") " Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.337282 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lx9l8\" (UniqueName: \"kubernetes.io/projected/8660dad9-43c8-4c00-872a-e00a6baab0f7-kube-api-access-lx9l8\") pod \"8660dad9-43c8-4c00-872a-e00a6baab0f7\" (UID: \"8660dad9-43c8-4c00-872a-e00a6baab0f7\") " Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.337336 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8660dad9-43c8-4c00-872a-e00a6baab0f7-env-overrides\") pod \"8660dad9-43c8-4c00-872a-e00a6baab0f7\" (UID: \"8660dad9-43c8-4c00-872a-e00a6baab0f7\") " Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.337425 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8660dad9-43c8-4c00-872a-e00a6baab0f7-ovnkube-config\") pod \"8660dad9-43c8-4c00-872a-e00a6baab0f7\" (UID: \"8660dad9-43c8-4c00-872a-e00a6baab0f7\") " Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.338559 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8660dad9-43c8-4c00-872a-e00a6baab0f7-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "8660dad9-43c8-4c00-872a-e00a6baab0f7" (UID: "8660dad9-43c8-4c00-872a-e00a6baab0f7"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.339180 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8660dad9-43c8-4c00-872a-e00a6baab0f7-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "8660dad9-43c8-4c00-872a-e00a6baab0f7" (UID: "8660dad9-43c8-4c00-872a-e00a6baab0f7"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.344143 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-wt48j"] Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.345081 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3ce4f34a-592b-4959-a248-ce0c338ddeea" containerName="oc" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.345103 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ce4f34a-592b-4959-a248-ce0c338ddeea" containerName="oc" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.345122 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8660dad9-43c8-4c00-872a-e00a6baab0f7" containerName="ovnkube-cluster-manager" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.345130 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="8660dad9-43c8-4c00-872a-e00a6baab0f7" containerName="ovnkube-cluster-manager" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.345144 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8660dad9-43c8-4c00-872a-e00a6baab0f7" containerName="kube-rbac-proxy" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.345151 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="8660dad9-43c8-4c00-872a-e00a6baab0f7" containerName="kube-rbac-proxy" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.345279 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="8660dad9-43c8-4c00-872a-e00a6baab0f7" containerName="ovnkube-cluster-manager" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.345343 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="3ce4f34a-592b-4959-a248-ce0c338ddeea" containerName="oc" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.345357 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="8660dad9-43c8-4c00-872a-e00a6baab0f7" containerName="kube-rbac-proxy" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.345617 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8660dad9-43c8-4c00-872a-e00a6baab0f7-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "8660dad9-43c8-4c00-872a-e00a6baab0f7" (UID: "8660dad9-43c8-4c00-872a-e00a6baab0f7"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.346792 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8660dad9-43c8-4c00-872a-e00a6baab0f7-kube-api-access-lx9l8" (OuterVolumeSpecName: "kube-api-access-lx9l8") pod "8660dad9-43c8-4c00-872a-e00a6baab0f7" (UID: "8660dad9-43c8-4c00-872a-e00a6baab0f7"). InnerVolumeSpecName "kube-api-access-lx9l8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.404790 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-wt48j" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.439095 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ec31e507-1a30-4028-b078-0686d3cedc4e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-wt48j\" (UID: \"ec31e507-1a30-4028-b078-0686d3cedc4e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-wt48j" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.439183 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5krn7\" (UniqueName: \"kubernetes.io/projected/ec31e507-1a30-4028-b078-0686d3cedc4e-kube-api-access-5krn7\") pod \"ovnkube-control-plane-97c9b6c48-wt48j\" (UID: \"ec31e507-1a30-4028-b078-0686d3cedc4e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-wt48j" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.439378 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ec31e507-1a30-4028-b078-0686d3cedc4e-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-wt48j\" (UID: \"ec31e507-1a30-4028-b078-0686d3cedc4e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-wt48j" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.439621 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ec31e507-1a30-4028-b078-0686d3cedc4e-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-wt48j\" (UID: \"ec31e507-1a30-4028-b078-0686d3cedc4e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-wt48j" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.439750 5124 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8660dad9-43c8-4c00-872a-e00a6baab0f7-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.439768 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lx9l8\" (UniqueName: \"kubernetes.io/projected/8660dad9-43c8-4c00-872a-e00a6baab0f7-kube-api-access-lx9l8\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.439784 5124 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8660dad9-43c8-4c00-872a-e00a6baab0f7-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.439799 5124 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8660dad9-43c8-4c00-872a-e00a6baab0f7-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.541113 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ec31e507-1a30-4028-b078-0686d3cedc4e-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-wt48j\" (UID: \"ec31e507-1a30-4028-b078-0686d3cedc4e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-wt48j" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.541803 5124 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ec31e507-1a30-4028-b078-0686d3cedc4e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-wt48j\" (UID: \"ec31e507-1a30-4028-b078-0686d3cedc4e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-wt48j" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.541886 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5krn7\" (UniqueName: \"kubernetes.io/projected/ec31e507-1a30-4028-b078-0686d3cedc4e-kube-api-access-5krn7\") pod \"ovnkube-control-plane-97c9b6c48-wt48j\" (UID: \"ec31e507-1a30-4028-b078-0686d3cedc4e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-wt48j" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.541930 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ec31e507-1a30-4028-b078-0686d3cedc4e-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-wt48j\" (UID: \"ec31e507-1a30-4028-b078-0686d3cedc4e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-wt48j" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.542671 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ec31e507-1a30-4028-b078-0686d3cedc4e-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-wt48j\" (UID: \"ec31e507-1a30-4028-b078-0686d3cedc4e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-wt48j" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.542916 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ec31e507-1a30-4028-b078-0686d3cedc4e-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-wt48j\" (UID: \"ec31e507-1a30-4028-b078-0686d3cedc4e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-wt48j" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.546517 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ec31e507-1a30-4028-b078-0686d3cedc4e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-wt48j\" (UID: \"ec31e507-1a30-4028-b078-0686d3cedc4e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-wt48j" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.561723 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5krn7\" (UniqueName: \"kubernetes.io/projected/ec31e507-1a30-4028-b078-0686d3cedc4e-kube-api-access-5krn7\") pod \"ovnkube-control-plane-97c9b6c48-wt48j\" (UID: \"ec31e507-1a30-4028-b078-0686d3cedc4e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-wt48j" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.621967 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sdh5t_d13181a0-d54a-460b-bbc7-4948fb1a4eaf/ovn-acl-logging/0.log" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.623945 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sdh5t_d13181a0-d54a-460b-bbc7-4948fb1a4eaf/ovn-controller/0.log" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.624447 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.683928 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2zwq5"] Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.684444 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" containerName="ovnkube-controller" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.684460 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" containerName="ovnkube-controller" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.684470 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" containerName="sbdb" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.684476 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" containerName="sbdb" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.684483 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.684490 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.684503 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" containerName="ovn-controller" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.684509 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" containerName="ovn-controller" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.684519 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" containerName="kubecfg-setup" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.684524 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" containerName="kubecfg-setup" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.684535 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" containerName="ovn-acl-logging" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.684540 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" containerName="ovn-acl-logging" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.684553 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" containerName="northd" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.684559 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" containerName="northd" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.684566 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" containerName="kube-rbac-proxy-node" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.684572 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" containerName="kube-rbac-proxy-node" Jan 26 00:19:50 crc 
kubenswrapper[5124]: I0126 00:19:50.684577 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" containerName="nbdb" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.684598 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" containerName="nbdb" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.684702 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" containerName="ovnkube-controller" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.684714 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" containerName="ovn-acl-logging" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.684723 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" containerName="northd" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.684735 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.684743 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" containerName="ovn-controller" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.684752 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" containerName="sbdb" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.684759 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" containerName="nbdb" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.684766 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" containerName="kube-rbac-proxy-node" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.728801 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.730717 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-wt48j" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.746087 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-run-ovn\") pod \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.746134 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-log-socket\") pod \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.746157 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-var-lib-openvswitch\") pod \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.746187 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-ovn-node-metrics-cert\") pod \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.746246 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-host-slash\") pod \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.746283 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sphjf\" (UniqueName: \"kubernetes.io/projected/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-kube-api-access-sphjf\") pod \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.746306 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-node-log\") pod \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.746339 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-run-systemd\") pod \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.746367 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-systemd-units\") pod \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.746415 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-env-overrides\") pod 
\"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.746436 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-etc-openvswitch\") pod \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.746468 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-host-cni-bin\") pod \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.746494 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-host-cni-netd\") pod \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.746535 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-ovnkube-script-lib\") pod \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.746560 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-host-run-netns\") pod \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.746619 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-host-run-ovn-kubernetes\") pod \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.746659 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-ovnkube-config\") pod \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.746715 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-host-kubelet\") pod \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.746756 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-run-openvswitch\") pod \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.746786 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\" (UID: \"d13181a0-d54a-460b-bbc7-4948fb1a4eaf\") " Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.747100 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "d13181a0-d54a-460b-bbc7-4948fb1a4eaf" (UID: "d13181a0-d54a-460b-bbc7-4948fb1a4eaf"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.747144 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "d13181a0-d54a-460b-bbc7-4948fb1a4eaf" (UID: "d13181a0-d54a-460b-bbc7-4948fb1a4eaf"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.747169 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-log-socket" (OuterVolumeSpecName: "log-socket") pod "d13181a0-d54a-460b-bbc7-4948fb1a4eaf" (UID: "d13181a0-d54a-460b-bbc7-4948fb1a4eaf"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.747192 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "d13181a0-d54a-460b-bbc7-4948fb1a4eaf" (UID: "d13181a0-d54a-460b-bbc7-4948fb1a4eaf"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.747547 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-host-slash" (OuterVolumeSpecName: "host-slash") pod "d13181a0-d54a-460b-bbc7-4948fb1a4eaf" (UID: "d13181a0-d54a-460b-bbc7-4948fb1a4eaf"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.747852 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "d13181a0-d54a-460b-bbc7-4948fb1a4eaf" (UID: "d13181a0-d54a-460b-bbc7-4948fb1a4eaf"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.747918 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "d13181a0-d54a-460b-bbc7-4948fb1a4eaf" (UID: "d13181a0-d54a-460b-bbc7-4948fb1a4eaf"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.747925 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-node-log" (OuterVolumeSpecName: "node-log") pod "d13181a0-d54a-460b-bbc7-4948fb1a4eaf" (UID: "d13181a0-d54a-460b-bbc7-4948fb1a4eaf"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.747944 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "d13181a0-d54a-460b-bbc7-4948fb1a4eaf" (UID: "d13181a0-d54a-460b-bbc7-4948fb1a4eaf"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.748418 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "d13181a0-d54a-460b-bbc7-4948fb1a4eaf" (UID: "d13181a0-d54a-460b-bbc7-4948fb1a4eaf"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.748502 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "d13181a0-d54a-460b-bbc7-4948fb1a4eaf" (UID: "d13181a0-d54a-460b-bbc7-4948fb1a4eaf"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.748578 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "d13181a0-d54a-460b-bbc7-4948fb1a4eaf" (UID: "d13181a0-d54a-460b-bbc7-4948fb1a4eaf"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.748575 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "d13181a0-d54a-460b-bbc7-4948fb1a4eaf" (UID: "d13181a0-d54a-460b-bbc7-4948fb1a4eaf"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.748638 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "d13181a0-d54a-460b-bbc7-4948fb1a4eaf" (UID: "d13181a0-d54a-460b-bbc7-4948fb1a4eaf"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.748653 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "d13181a0-d54a-460b-bbc7-4948fb1a4eaf" (UID: "d13181a0-d54a-460b-bbc7-4948fb1a4eaf"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.748705 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "d13181a0-d54a-460b-bbc7-4948fb1a4eaf" (UID: "d13181a0-d54a-460b-bbc7-4948fb1a4eaf"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.749344 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "d13181a0-d54a-460b-bbc7-4948fb1a4eaf" (UID: "d13181a0-d54a-460b-bbc7-4948fb1a4eaf"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.753134 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-kube-api-access-sphjf" (OuterVolumeSpecName: "kube-api-access-sphjf") pod "d13181a0-d54a-460b-bbc7-4948fb1a4eaf" (UID: "d13181a0-d54a-460b-bbc7-4948fb1a4eaf"). InnerVolumeSpecName "kube-api-access-sphjf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.753261 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "d13181a0-d54a-460b-bbc7-4948fb1a4eaf" (UID: "d13181a0-d54a-460b-bbc7-4948fb1a4eaf"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.762237 5124 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.782929 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sdh5t_d13181a0-d54a-460b-bbc7-4948fb1a4eaf/ovn-acl-logging/0.log" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.784295 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sdh5t_d13181a0-d54a-460b-bbc7-4948fb1a4eaf/ovn-controller/0.log" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786236 5124 generic.go:358] "Generic (PLEG): container finished" podID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" containerID="b88d112d1a5a62e96208e9742a4f115e993356d2fe5cbb8114638a70a7504768" exitCode=0 Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786266 5124 generic.go:358] "Generic (PLEG): container finished" podID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" containerID="ca1b69bade3b1295f64aadb4876cc913493c85d40031cbb54db17ed26dd59b35" exitCode=0 Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786275 5124 generic.go:358] "Generic (PLEG): container finished" podID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" containerID="b5f0b719809cb0822f685a25e50f161bb2ebc5cf1c23741f70dd758ddb876b3b" exitCode=0 Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786283 5124 generic.go:358] "Generic (PLEG): container finished" podID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" containerID="7bc933e302fe556f3f2333aabcc9a3d08a97facf76c1e513bf999a1d988e23fb" exitCode=0 Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786292 5124 generic.go:358] "Generic (PLEG): container finished" podID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" containerID="5d0e243efad4a7977ff479d31ca346032d0b27e840a55f1e5a5d7bb273240f85" exitCode=0 Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786300 5124 generic.go:358] "Generic (PLEG): container finished" podID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" containerID="0d77c5aa52d9865a0987d699842af27c3031d5bc2f5c315f2214c950b8209d98" exitCode=0 Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786308 5124 generic.go:358] "Generic (PLEG): container finished" podID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" containerID="a739bd25adba363ac8e62a851d5bbc4e0970ab2b4d947f6b0abcb988e9b8ae33" exitCode=143 Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786317 5124 generic.go:358] "Generic (PLEG): container finished" podID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" containerID="0846e7a97039a12d11ae54a129bb1cc8d22304487515a089342a29f2e46c54cd" exitCode=143 Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786362 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" event={"ID":"d13181a0-d54a-460b-bbc7-4948fb1a4eaf","Type":"ContainerDied","Data":"b88d112d1a5a62e96208e9742a4f115e993356d2fe5cbb8114638a70a7504768"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786400 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786461 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" event={"ID":"d13181a0-d54a-460b-bbc7-4948fb1a4eaf","Type":"ContainerDied","Data":"ca1b69bade3b1295f64aadb4876cc913493c85d40031cbb54db17ed26dd59b35"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786481 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" event={"ID":"d13181a0-d54a-460b-bbc7-4948fb1a4eaf","Type":"ContainerDied","Data":"b5f0b719809cb0822f685a25e50f161bb2ebc5cf1c23741f70dd758ddb876b3b"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786499 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" event={"ID":"d13181a0-d54a-460b-bbc7-4948fb1a4eaf","Type":"ContainerDied","Data":"7bc933e302fe556f3f2333aabcc9a3d08a97facf76c1e513bf999a1d988e23fb"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786521 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" event={"ID":"d13181a0-d54a-460b-bbc7-4948fb1a4eaf","Type":"ContainerDied","Data":"5d0e243efad4a7977ff479d31ca346032d0b27e840a55f1e5a5d7bb273240f85"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786537 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" event={"ID":"d13181a0-d54a-460b-bbc7-4948fb1a4eaf","Type":"ContainerDied","Data":"0d77c5aa52d9865a0987d699842af27c3031d5bc2f5c315f2214c950b8209d98"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786556 5124 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a739bd25adba363ac8e62a851d5bbc4e0970ab2b4d947f6b0abcb988e9b8ae33"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786575 5124 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0846e7a97039a12d11ae54a129bb1cc8d22304487515a089342a29f2e46c54cd"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786604 5124 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"30bcd38ee5002aec5c579da18a01b6eea73a299bcf98658882e25e67b70e339f"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786617 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" event={"ID":"d13181a0-d54a-460b-bbc7-4948fb1a4eaf","Type":"ContainerDied","Data":"a739bd25adba363ac8e62a851d5bbc4e0970ab2b4d947f6b0abcb988e9b8ae33"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786643 5124 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b88d112d1a5a62e96208e9742a4f115e993356d2fe5cbb8114638a70a7504768"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786653 5124 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ca1b69bade3b1295f64aadb4876cc913493c85d40031cbb54db17ed26dd59b35"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786661 5124 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b5f0b719809cb0822f685a25e50f161bb2ebc5cf1c23741f70dd758ddb876b3b"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786670 5124 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7bc933e302fe556f3f2333aabcc9a3d08a97facf76c1e513bf999a1d988e23fb"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786680 5124 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d0e243efad4a7977ff479d31ca346032d0b27e840a55f1e5a5d7bb273240f85"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786687 5124 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0d77c5aa52d9865a0987d699842af27c3031d5bc2f5c315f2214c950b8209d98"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786616 5124 scope.go:117] "RemoveContainer" containerID="b88d112d1a5a62e96208e9742a4f115e993356d2fe5cbb8114638a70a7504768" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786700 5124 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a739bd25adba363ac8e62a851d5bbc4e0970ab2b4d947f6b0abcb988e9b8ae33"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786710 5124 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0846e7a97039a12d11ae54a129bb1cc8d22304487515a089342a29f2e46c54cd"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786719 5124 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"30bcd38ee5002aec5c579da18a01b6eea73a299bcf98658882e25e67b70e339f"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786733 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" event={"ID":"d13181a0-d54a-460b-bbc7-4948fb1a4eaf","Type":"ContainerDied","Data":"0846e7a97039a12d11ae54a129bb1cc8d22304487515a089342a29f2e46c54cd"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786747 5124 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b88d112d1a5a62e96208e9742a4f115e993356d2fe5cbb8114638a70a7504768"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786758 5124 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ca1b69bade3b1295f64aadb4876cc913493c85d40031cbb54db17ed26dd59b35"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786766 5124 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b5f0b719809cb0822f685a25e50f161bb2ebc5cf1c23741f70dd758ddb876b3b"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786774 5124 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7bc933e302fe556f3f2333aabcc9a3d08a97facf76c1e513bf999a1d988e23fb"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786782 5124 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d0e243efad4a7977ff479d31ca346032d0b27e840a55f1e5a5d7bb273240f85"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786790 5124 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0d77c5aa52d9865a0987d699842af27c3031d5bc2f5c315f2214c950b8209d98"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786797 5124 pod_container_deletor.go:114] "Failed to issue the request to remove 
container" containerID={"Type":"cri-o","ID":"a739bd25adba363ac8e62a851d5bbc4e0970ab2b4d947f6b0abcb988e9b8ae33"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786804 5124 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0846e7a97039a12d11ae54a129bb1cc8d22304487515a089342a29f2e46c54cd"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786812 5124 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"30bcd38ee5002aec5c579da18a01b6eea73a299bcf98658882e25e67b70e339f"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786825 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sdh5t" event={"ID":"d13181a0-d54a-460b-bbc7-4948fb1a4eaf","Type":"ContainerDied","Data":"a4b6f731862c59616a6d616cabe04b020f0309fb492113732c1e390cdc8eada8"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786836 5124 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b88d112d1a5a62e96208e9742a4f115e993356d2fe5cbb8114638a70a7504768"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786849 5124 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ca1b69bade3b1295f64aadb4876cc913493c85d40031cbb54db17ed26dd59b35"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786857 5124 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b5f0b719809cb0822f685a25e50f161bb2ebc5cf1c23741f70dd758ddb876b3b"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786866 5124 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7bc933e302fe556f3f2333aabcc9a3d08a97facf76c1e513bf999a1d988e23fb"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786872 5124 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d0e243efad4a7977ff479d31ca346032d0b27e840a55f1e5a5d7bb273240f85"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786881 5124 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0d77c5aa52d9865a0987d699842af27c3031d5bc2f5c315f2214c950b8209d98"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786888 5124 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a739bd25adba363ac8e62a851d5bbc4e0970ab2b4d947f6b0abcb988e9b8ae33"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786896 5124 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0846e7a97039a12d11ae54a129bb1cc8d22304487515a089342a29f2e46c54cd"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.786904 5124 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"30bcd38ee5002aec5c579da18a01b6eea73a299bcf98658882e25e67b70e339f"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.792574 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "d13181a0-d54a-460b-bbc7-4948fb1a4eaf" (UID: "d13181a0-d54a-460b-bbc7-4948fb1a4eaf"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.797816 5124 generic.go:358] "Generic (PLEG): container finished" podID="8660dad9-43c8-4c00-872a-e00a6baab0f7" containerID="b6f2454f5333ab911eebfd64bf0a3fabf18ab1bbf4c865a0ff147603146a0da7" exitCode=0 Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.797885 5124 generic.go:358] "Generic (PLEG): container finished" podID="8660dad9-43c8-4c00-872a-e00a6baab0f7" containerID="30d8c5e11238102663950d2ebd33f9bc42936ba7b859ad8cbb88cd6f37520d8b" exitCode=0 Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.797907 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-mpdlk" event={"ID":"8660dad9-43c8-4c00-872a-e00a6baab0f7","Type":"ContainerDied","Data":"b6f2454f5333ab911eebfd64bf0a3fabf18ab1bbf4c865a0ff147603146a0da7"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.797960 5124 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b6f2454f5333ab911eebfd64bf0a3fabf18ab1bbf4c865a0ff147603146a0da7"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.797971 5124 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"30d8c5e11238102663950d2ebd33f9bc42936ba7b859ad8cbb88cd6f37520d8b"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.797984 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-mpdlk" event={"ID":"8660dad9-43c8-4c00-872a-e00a6baab0f7","Type":"ContainerDied","Data":"30d8c5e11238102663950d2ebd33f9bc42936ba7b859ad8cbb88cd6f37520d8b"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.797992 5124 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b6f2454f5333ab911eebfd64bf0a3fabf18ab1bbf4c865a0ff147603146a0da7"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.797998 5124 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"30d8c5e11238102663950d2ebd33f9bc42936ba7b859ad8cbb88cd6f37520d8b"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.798005 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-mpdlk" event={"ID":"8660dad9-43c8-4c00-872a-e00a6baab0f7","Type":"ContainerDied","Data":"4add0094d46275fe3cf880709f994c2de148adc310aa03ed67888ed05f96abd1"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.798011 5124 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b6f2454f5333ab911eebfd64bf0a3fabf18ab1bbf4c865a0ff147603146a0da7"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.798017 5124 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"30d8c5e11238102663950d2ebd33f9bc42936ba7b859ad8cbb88cd6f37520d8b"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.798122 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-mpdlk" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.801698 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-smnb7_f826f136-a910-4120-aa62-a08e427590c0/kube-multus/0.log" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.801756 5124 generic.go:358] "Generic (PLEG): container finished" podID="f826f136-a910-4120-aa62-a08e427590c0" containerID="0af4c7adce9ca2591a5e45ed1b33cb8402b5e759836f9fbb681395b39fc0b6d8" exitCode=2 Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.801942 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-smnb7" event={"ID":"f826f136-a910-4120-aa62-a08e427590c0","Type":"ContainerDied","Data":"0af4c7adce9ca2591a5e45ed1b33cb8402b5e759836f9fbb681395b39fc0b6d8"} Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.802414 5124 scope.go:117] "RemoveContainer" containerID="0af4c7adce9ca2591a5e45ed1b33cb8402b5e759836f9fbb681395b39fc0b6d8" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.819988 5124 scope.go:117] "RemoveContainer" containerID="ca1b69bade3b1295f64aadb4876cc913493c85d40031cbb54db17ed26dd59b35" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.840542 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-mpdlk"] Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.843744 5124 scope.go:117] "RemoveContainer" containerID="b5f0b719809cb0822f685a25e50f161bb2ebc5cf1c23741f70dd758ddb876b3b" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.843897 5124 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-mpdlk"] Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.848756 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-ovn-node-metrics-cert\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.848814 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-var-lib-openvswitch\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.848846 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-host-cni-bin\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.848874 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-run-systemd\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.848934 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-log-socket\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.848985 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-host-run-ovn-kubernetes\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.849016 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-systemd-units\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.849042 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-run-openvswitch\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.849064 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-run-ovn\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.849090 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-env-overrides\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.849114 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-ovnkube-config\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.849198 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-node-log\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.849287 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kstz4\" (UniqueName: \"kubernetes.io/projected/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-kube-api-access-kstz4\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.849326 5124 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-host-run-netns\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.849381 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-host-cni-netd\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.849399 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-ovnkube-script-lib\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.849478 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.849521 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-etc-openvswitch\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.849554 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-host-slash\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.849573 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-host-kubelet\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.849704 5124 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.849771 5124 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.849790 5124 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.849799 5124 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.849808 5124 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.849818 5124 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.849827 5124 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-log-socket\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.849838 5124 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.849850 5124 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.849858 5124 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-host-slash\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.849866 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sphjf\" (UniqueName: \"kubernetes.io/projected/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-kube-api-access-sphjf\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.849875 5124 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-node-log\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.849883 5124 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.849891 5124 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.849899 5124 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.849907 5124 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.849915 5124 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.849924 5124 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.849934 5124 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.849942 5124 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d13181a0-d54a-460b-bbc7-4948fb1a4eaf-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.884386 5124 scope.go:117] "RemoveContainer" containerID="7bc933e302fe556f3f2333aabcc9a3d08a97facf76c1e513bf999a1d988e23fb" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.899965 5124 scope.go:117] "RemoveContainer" containerID="5d0e243efad4a7977ff479d31ca346032d0b27e840a55f1e5a5d7bb273240f85" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.924675 5124 scope.go:117] "RemoveContainer" containerID="0d77c5aa52d9865a0987d699842af27c3031d5bc2f5c315f2214c950b8209d98" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.943529 5124 scope.go:117] "RemoveContainer" containerID="a739bd25adba363ac8e62a851d5bbc4e0970ab2b4d947f6b0abcb988e9b8ae33" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.950974 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-host-run-ovn-kubernetes\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.951031 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-systemd-units\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.951066 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-run-openvswitch\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.951092 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-run-ovn\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.951115 5124 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-env-overrides\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.951251 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-ovnkube-config\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.951122 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-host-run-ovn-kubernetes\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.951194 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-run-ovn\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.951205 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-run-openvswitch\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.951191 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-systemd-units\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.951447 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-node-log\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.951490 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-node-log\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.951514 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kstz4\" (UniqueName: \"kubernetes.io/projected/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-kube-api-access-kstz4\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.951546 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-host-run-netns\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.951578 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-host-cni-netd\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.951619 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-ovnkube-script-lib\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.951670 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.951700 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-etc-openvswitch\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.951726 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-host-slash\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.951745 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-host-kubelet\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.951774 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-ovn-node-metrics-cert\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.951799 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-var-lib-openvswitch\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.951830 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-host-cni-bin\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.951854 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-run-systemd\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.951873 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-log-socket\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.951942 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-log-socket\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.951975 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-host-slash\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.952012 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-var-lib-openvswitch\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.952050 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-host-cni-bin\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.952080 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-run-systemd\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.952414 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.952514 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-etc-openvswitch\") pod \"ovnkube-node-2zwq5\" (UID: 
\"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.951981 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-host-kubelet\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.952620 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-host-run-netns\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.952660 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-host-cni-netd\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.952742 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-ovnkube-script-lib\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.952742 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-env-overrides\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.952853 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-ovnkube-config\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.957642 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-ovn-node-metrics-cert\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.964225 5124 scope.go:117] "RemoveContainer" containerID="0846e7a97039a12d11ae54a129bb1cc8d22304487515a089342a29f2e46c54cd" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.969946 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kstz4\" (UniqueName: \"kubernetes.io/projected/8d55d96b-7f79-4b99-add1-b38c6cb96f5e-kube-api-access-kstz4\") pod \"ovnkube-node-2zwq5\" (UID: \"8d55d96b-7f79-4b99-add1-b38c6cb96f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:50 crc kubenswrapper[5124]: I0126 00:19:50.988364 5124 scope.go:117] "RemoveContainer" containerID="30bcd38ee5002aec5c579da18a01b6eea73a299bcf98658882e25e67b70e339f" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.010304 5124 
scope.go:117] "RemoveContainer" containerID="b88d112d1a5a62e96208e9742a4f115e993356d2fe5cbb8114638a70a7504768" Jan 26 00:19:51 crc kubenswrapper[5124]: E0126 00:19:51.013852 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b88d112d1a5a62e96208e9742a4f115e993356d2fe5cbb8114638a70a7504768\": container with ID starting with b88d112d1a5a62e96208e9742a4f115e993356d2fe5cbb8114638a70a7504768 not found: ID does not exist" containerID="b88d112d1a5a62e96208e9742a4f115e993356d2fe5cbb8114638a70a7504768" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.013900 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b88d112d1a5a62e96208e9742a4f115e993356d2fe5cbb8114638a70a7504768"} err="failed to get container status \"b88d112d1a5a62e96208e9742a4f115e993356d2fe5cbb8114638a70a7504768\": rpc error: code = NotFound desc = could not find container \"b88d112d1a5a62e96208e9742a4f115e993356d2fe5cbb8114638a70a7504768\": container with ID starting with b88d112d1a5a62e96208e9742a4f115e993356d2fe5cbb8114638a70a7504768 not found: ID does not exist" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.013935 5124 scope.go:117] "RemoveContainer" containerID="ca1b69bade3b1295f64aadb4876cc913493c85d40031cbb54db17ed26dd59b35" Jan 26 00:19:51 crc kubenswrapper[5124]: E0126 00:19:51.014214 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca1b69bade3b1295f64aadb4876cc913493c85d40031cbb54db17ed26dd59b35\": container with ID starting with ca1b69bade3b1295f64aadb4876cc913493c85d40031cbb54db17ed26dd59b35 not found: ID does not exist" containerID="ca1b69bade3b1295f64aadb4876cc913493c85d40031cbb54db17ed26dd59b35" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.014246 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca1b69bade3b1295f64aadb4876cc913493c85d40031cbb54db17ed26dd59b35"} err="failed to get container status \"ca1b69bade3b1295f64aadb4876cc913493c85d40031cbb54db17ed26dd59b35\": rpc error: code = NotFound desc = could not find container \"ca1b69bade3b1295f64aadb4876cc913493c85d40031cbb54db17ed26dd59b35\": container with ID starting with ca1b69bade3b1295f64aadb4876cc913493c85d40031cbb54db17ed26dd59b35 not found: ID does not exist" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.014264 5124 scope.go:117] "RemoveContainer" containerID="b5f0b719809cb0822f685a25e50f161bb2ebc5cf1c23741f70dd758ddb876b3b" Jan 26 00:19:51 crc kubenswrapper[5124]: E0126 00:19:51.014608 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5f0b719809cb0822f685a25e50f161bb2ebc5cf1c23741f70dd758ddb876b3b\": container with ID starting with b5f0b719809cb0822f685a25e50f161bb2ebc5cf1c23741f70dd758ddb876b3b not found: ID does not exist" containerID="b5f0b719809cb0822f685a25e50f161bb2ebc5cf1c23741f70dd758ddb876b3b" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.014630 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5f0b719809cb0822f685a25e50f161bb2ebc5cf1c23741f70dd758ddb876b3b"} err="failed to get container status \"b5f0b719809cb0822f685a25e50f161bb2ebc5cf1c23741f70dd758ddb876b3b\": rpc error: code = NotFound desc = could not find container \"b5f0b719809cb0822f685a25e50f161bb2ebc5cf1c23741f70dd758ddb876b3b\": container with ID starting with 
b5f0b719809cb0822f685a25e50f161bb2ebc5cf1c23741f70dd758ddb876b3b not found: ID does not exist" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.014647 5124 scope.go:117] "RemoveContainer" containerID="7bc933e302fe556f3f2333aabcc9a3d08a97facf76c1e513bf999a1d988e23fb" Jan 26 00:19:51 crc kubenswrapper[5124]: E0126 00:19:51.014848 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bc933e302fe556f3f2333aabcc9a3d08a97facf76c1e513bf999a1d988e23fb\": container with ID starting with 7bc933e302fe556f3f2333aabcc9a3d08a97facf76c1e513bf999a1d988e23fb not found: ID does not exist" containerID="7bc933e302fe556f3f2333aabcc9a3d08a97facf76c1e513bf999a1d988e23fb" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.014869 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bc933e302fe556f3f2333aabcc9a3d08a97facf76c1e513bf999a1d988e23fb"} err="failed to get container status \"7bc933e302fe556f3f2333aabcc9a3d08a97facf76c1e513bf999a1d988e23fb\": rpc error: code = NotFound desc = could not find container \"7bc933e302fe556f3f2333aabcc9a3d08a97facf76c1e513bf999a1d988e23fb\": container with ID starting with 7bc933e302fe556f3f2333aabcc9a3d08a97facf76c1e513bf999a1d988e23fb not found: ID does not exist" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.014889 5124 scope.go:117] "RemoveContainer" containerID="5d0e243efad4a7977ff479d31ca346032d0b27e840a55f1e5a5d7bb273240f85" Jan 26 00:19:51 crc kubenswrapper[5124]: E0126 00:19:51.015085 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d0e243efad4a7977ff479d31ca346032d0b27e840a55f1e5a5d7bb273240f85\": container with ID starting with 5d0e243efad4a7977ff479d31ca346032d0b27e840a55f1e5a5d7bb273240f85 not found: ID does not exist" containerID="5d0e243efad4a7977ff479d31ca346032d0b27e840a55f1e5a5d7bb273240f85" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.015112 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d0e243efad4a7977ff479d31ca346032d0b27e840a55f1e5a5d7bb273240f85"} err="failed to get container status \"5d0e243efad4a7977ff479d31ca346032d0b27e840a55f1e5a5d7bb273240f85\": rpc error: code = NotFound desc = could not find container \"5d0e243efad4a7977ff479d31ca346032d0b27e840a55f1e5a5d7bb273240f85\": container with ID starting with 5d0e243efad4a7977ff479d31ca346032d0b27e840a55f1e5a5d7bb273240f85 not found: ID does not exist" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.015134 5124 scope.go:117] "RemoveContainer" containerID="0d77c5aa52d9865a0987d699842af27c3031d5bc2f5c315f2214c950b8209d98" Jan 26 00:19:51 crc kubenswrapper[5124]: E0126 00:19:51.015475 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d77c5aa52d9865a0987d699842af27c3031d5bc2f5c315f2214c950b8209d98\": container with ID starting with 0d77c5aa52d9865a0987d699842af27c3031d5bc2f5c315f2214c950b8209d98 not found: ID does not exist" containerID="0d77c5aa52d9865a0987d699842af27c3031d5bc2f5c315f2214c950b8209d98" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.015505 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d77c5aa52d9865a0987d699842af27c3031d5bc2f5c315f2214c950b8209d98"} err="failed to get container status \"0d77c5aa52d9865a0987d699842af27c3031d5bc2f5c315f2214c950b8209d98\": rpc 
error: code = NotFound desc = could not find container \"0d77c5aa52d9865a0987d699842af27c3031d5bc2f5c315f2214c950b8209d98\": container with ID starting with 0d77c5aa52d9865a0987d699842af27c3031d5bc2f5c315f2214c950b8209d98 not found: ID does not exist" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.015523 5124 scope.go:117] "RemoveContainer" containerID="a739bd25adba363ac8e62a851d5bbc4e0970ab2b4d947f6b0abcb988e9b8ae33" Jan 26 00:19:51 crc kubenswrapper[5124]: E0126 00:19:51.016023 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a739bd25adba363ac8e62a851d5bbc4e0970ab2b4d947f6b0abcb988e9b8ae33\": container with ID starting with a739bd25adba363ac8e62a851d5bbc4e0970ab2b4d947f6b0abcb988e9b8ae33 not found: ID does not exist" containerID="a739bd25adba363ac8e62a851d5bbc4e0970ab2b4d947f6b0abcb988e9b8ae33" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.016054 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a739bd25adba363ac8e62a851d5bbc4e0970ab2b4d947f6b0abcb988e9b8ae33"} err="failed to get container status \"a739bd25adba363ac8e62a851d5bbc4e0970ab2b4d947f6b0abcb988e9b8ae33\": rpc error: code = NotFound desc = could not find container \"a739bd25adba363ac8e62a851d5bbc4e0970ab2b4d947f6b0abcb988e9b8ae33\": container with ID starting with a739bd25adba363ac8e62a851d5bbc4e0970ab2b4d947f6b0abcb988e9b8ae33 not found: ID does not exist" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.016071 5124 scope.go:117] "RemoveContainer" containerID="0846e7a97039a12d11ae54a129bb1cc8d22304487515a089342a29f2e46c54cd" Jan 26 00:19:51 crc kubenswrapper[5124]: E0126 00:19:51.016375 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0846e7a97039a12d11ae54a129bb1cc8d22304487515a089342a29f2e46c54cd\": container with ID starting with 0846e7a97039a12d11ae54a129bb1cc8d22304487515a089342a29f2e46c54cd not found: ID does not exist" containerID="0846e7a97039a12d11ae54a129bb1cc8d22304487515a089342a29f2e46c54cd" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.016408 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0846e7a97039a12d11ae54a129bb1cc8d22304487515a089342a29f2e46c54cd"} err="failed to get container status \"0846e7a97039a12d11ae54a129bb1cc8d22304487515a089342a29f2e46c54cd\": rpc error: code = NotFound desc = could not find container \"0846e7a97039a12d11ae54a129bb1cc8d22304487515a089342a29f2e46c54cd\": container with ID starting with 0846e7a97039a12d11ae54a129bb1cc8d22304487515a089342a29f2e46c54cd not found: ID does not exist" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.016430 5124 scope.go:117] "RemoveContainer" containerID="30bcd38ee5002aec5c579da18a01b6eea73a299bcf98658882e25e67b70e339f" Jan 26 00:19:51 crc kubenswrapper[5124]: E0126 00:19:51.016912 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30bcd38ee5002aec5c579da18a01b6eea73a299bcf98658882e25e67b70e339f\": container with ID starting with 30bcd38ee5002aec5c579da18a01b6eea73a299bcf98658882e25e67b70e339f not found: ID does not exist" containerID="30bcd38ee5002aec5c579da18a01b6eea73a299bcf98658882e25e67b70e339f" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.016978 5124 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"30bcd38ee5002aec5c579da18a01b6eea73a299bcf98658882e25e67b70e339f"} err="failed to get container status \"30bcd38ee5002aec5c579da18a01b6eea73a299bcf98658882e25e67b70e339f\": rpc error: code = NotFound desc = could not find container \"30bcd38ee5002aec5c579da18a01b6eea73a299bcf98658882e25e67b70e339f\": container with ID starting with 30bcd38ee5002aec5c579da18a01b6eea73a299bcf98658882e25e67b70e339f not found: ID does not exist" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.017027 5124 scope.go:117] "RemoveContainer" containerID="b88d112d1a5a62e96208e9742a4f115e993356d2fe5cbb8114638a70a7504768" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.017393 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b88d112d1a5a62e96208e9742a4f115e993356d2fe5cbb8114638a70a7504768"} err="failed to get container status \"b88d112d1a5a62e96208e9742a4f115e993356d2fe5cbb8114638a70a7504768\": rpc error: code = NotFound desc = could not find container \"b88d112d1a5a62e96208e9742a4f115e993356d2fe5cbb8114638a70a7504768\": container with ID starting with b88d112d1a5a62e96208e9742a4f115e993356d2fe5cbb8114638a70a7504768 not found: ID does not exist" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.017448 5124 scope.go:117] "RemoveContainer" containerID="ca1b69bade3b1295f64aadb4876cc913493c85d40031cbb54db17ed26dd59b35" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.018486 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca1b69bade3b1295f64aadb4876cc913493c85d40031cbb54db17ed26dd59b35"} err="failed to get container status \"ca1b69bade3b1295f64aadb4876cc913493c85d40031cbb54db17ed26dd59b35\": rpc error: code = NotFound desc = could not find container \"ca1b69bade3b1295f64aadb4876cc913493c85d40031cbb54db17ed26dd59b35\": container with ID starting with ca1b69bade3b1295f64aadb4876cc913493c85d40031cbb54db17ed26dd59b35 not found: ID does not exist" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.018530 5124 scope.go:117] "RemoveContainer" containerID="b5f0b719809cb0822f685a25e50f161bb2ebc5cf1c23741f70dd758ddb876b3b" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.018891 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5f0b719809cb0822f685a25e50f161bb2ebc5cf1c23741f70dd758ddb876b3b"} err="failed to get container status \"b5f0b719809cb0822f685a25e50f161bb2ebc5cf1c23741f70dd758ddb876b3b\": rpc error: code = NotFound desc = could not find container \"b5f0b719809cb0822f685a25e50f161bb2ebc5cf1c23741f70dd758ddb876b3b\": container with ID starting with b5f0b719809cb0822f685a25e50f161bb2ebc5cf1c23741f70dd758ddb876b3b not found: ID does not exist" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.018916 5124 scope.go:117] "RemoveContainer" containerID="7bc933e302fe556f3f2333aabcc9a3d08a97facf76c1e513bf999a1d988e23fb" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.019279 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bc933e302fe556f3f2333aabcc9a3d08a97facf76c1e513bf999a1d988e23fb"} err="failed to get container status \"7bc933e302fe556f3f2333aabcc9a3d08a97facf76c1e513bf999a1d988e23fb\": rpc error: code = NotFound desc = could not find container \"7bc933e302fe556f3f2333aabcc9a3d08a97facf76c1e513bf999a1d988e23fb\": container with ID starting with 7bc933e302fe556f3f2333aabcc9a3d08a97facf76c1e513bf999a1d988e23fb not found: ID does not exist" Jan 
26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.019311 5124 scope.go:117] "RemoveContainer" containerID="5d0e243efad4a7977ff479d31ca346032d0b27e840a55f1e5a5d7bb273240f85" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.019642 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d0e243efad4a7977ff479d31ca346032d0b27e840a55f1e5a5d7bb273240f85"} err="failed to get container status \"5d0e243efad4a7977ff479d31ca346032d0b27e840a55f1e5a5d7bb273240f85\": rpc error: code = NotFound desc = could not find container \"5d0e243efad4a7977ff479d31ca346032d0b27e840a55f1e5a5d7bb273240f85\": container with ID starting with 5d0e243efad4a7977ff479d31ca346032d0b27e840a55f1e5a5d7bb273240f85 not found: ID does not exist" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.019667 5124 scope.go:117] "RemoveContainer" containerID="0d77c5aa52d9865a0987d699842af27c3031d5bc2f5c315f2214c950b8209d98" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.019926 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d77c5aa52d9865a0987d699842af27c3031d5bc2f5c315f2214c950b8209d98"} err="failed to get container status \"0d77c5aa52d9865a0987d699842af27c3031d5bc2f5c315f2214c950b8209d98\": rpc error: code = NotFound desc = could not find container \"0d77c5aa52d9865a0987d699842af27c3031d5bc2f5c315f2214c950b8209d98\": container with ID starting with 0d77c5aa52d9865a0987d699842af27c3031d5bc2f5c315f2214c950b8209d98 not found: ID does not exist" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.019951 5124 scope.go:117] "RemoveContainer" containerID="a739bd25adba363ac8e62a851d5bbc4e0970ab2b4d947f6b0abcb988e9b8ae33" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.020236 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a739bd25adba363ac8e62a851d5bbc4e0970ab2b4d947f6b0abcb988e9b8ae33"} err="failed to get container status \"a739bd25adba363ac8e62a851d5bbc4e0970ab2b4d947f6b0abcb988e9b8ae33\": rpc error: code = NotFound desc = could not find container \"a739bd25adba363ac8e62a851d5bbc4e0970ab2b4d947f6b0abcb988e9b8ae33\": container with ID starting with a739bd25adba363ac8e62a851d5bbc4e0970ab2b4d947f6b0abcb988e9b8ae33 not found: ID does not exist" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.020260 5124 scope.go:117] "RemoveContainer" containerID="0846e7a97039a12d11ae54a129bb1cc8d22304487515a089342a29f2e46c54cd" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.020482 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0846e7a97039a12d11ae54a129bb1cc8d22304487515a089342a29f2e46c54cd"} err="failed to get container status \"0846e7a97039a12d11ae54a129bb1cc8d22304487515a089342a29f2e46c54cd\": rpc error: code = NotFound desc = could not find container \"0846e7a97039a12d11ae54a129bb1cc8d22304487515a089342a29f2e46c54cd\": container with ID starting with 0846e7a97039a12d11ae54a129bb1cc8d22304487515a089342a29f2e46c54cd not found: ID does not exist" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.020519 5124 scope.go:117] "RemoveContainer" containerID="30bcd38ee5002aec5c579da18a01b6eea73a299bcf98658882e25e67b70e339f" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.020731 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30bcd38ee5002aec5c579da18a01b6eea73a299bcf98658882e25e67b70e339f"} err="failed to get container status 
\"30bcd38ee5002aec5c579da18a01b6eea73a299bcf98658882e25e67b70e339f\": rpc error: code = NotFound desc = could not find container \"30bcd38ee5002aec5c579da18a01b6eea73a299bcf98658882e25e67b70e339f\": container with ID starting with 30bcd38ee5002aec5c579da18a01b6eea73a299bcf98658882e25e67b70e339f not found: ID does not exist" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.020933 5124 scope.go:117] "RemoveContainer" containerID="b88d112d1a5a62e96208e9742a4f115e993356d2fe5cbb8114638a70a7504768" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.021195 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b88d112d1a5a62e96208e9742a4f115e993356d2fe5cbb8114638a70a7504768"} err="failed to get container status \"b88d112d1a5a62e96208e9742a4f115e993356d2fe5cbb8114638a70a7504768\": rpc error: code = NotFound desc = could not find container \"b88d112d1a5a62e96208e9742a4f115e993356d2fe5cbb8114638a70a7504768\": container with ID starting with b88d112d1a5a62e96208e9742a4f115e993356d2fe5cbb8114638a70a7504768 not found: ID does not exist" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.021221 5124 scope.go:117] "RemoveContainer" containerID="ca1b69bade3b1295f64aadb4876cc913493c85d40031cbb54db17ed26dd59b35" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.021420 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca1b69bade3b1295f64aadb4876cc913493c85d40031cbb54db17ed26dd59b35"} err="failed to get container status \"ca1b69bade3b1295f64aadb4876cc913493c85d40031cbb54db17ed26dd59b35\": rpc error: code = NotFound desc = could not find container \"ca1b69bade3b1295f64aadb4876cc913493c85d40031cbb54db17ed26dd59b35\": container with ID starting with ca1b69bade3b1295f64aadb4876cc913493c85d40031cbb54db17ed26dd59b35 not found: ID does not exist" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.021446 5124 scope.go:117] "RemoveContainer" containerID="b5f0b719809cb0822f685a25e50f161bb2ebc5cf1c23741f70dd758ddb876b3b" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.021791 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5f0b719809cb0822f685a25e50f161bb2ebc5cf1c23741f70dd758ddb876b3b"} err="failed to get container status \"b5f0b719809cb0822f685a25e50f161bb2ebc5cf1c23741f70dd758ddb876b3b\": rpc error: code = NotFound desc = could not find container \"b5f0b719809cb0822f685a25e50f161bb2ebc5cf1c23741f70dd758ddb876b3b\": container with ID starting with b5f0b719809cb0822f685a25e50f161bb2ebc5cf1c23741f70dd758ddb876b3b not found: ID does not exist" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.021821 5124 scope.go:117] "RemoveContainer" containerID="7bc933e302fe556f3f2333aabcc9a3d08a97facf76c1e513bf999a1d988e23fb" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.022019 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bc933e302fe556f3f2333aabcc9a3d08a97facf76c1e513bf999a1d988e23fb"} err="failed to get container status \"7bc933e302fe556f3f2333aabcc9a3d08a97facf76c1e513bf999a1d988e23fb\": rpc error: code = NotFound desc = could not find container \"7bc933e302fe556f3f2333aabcc9a3d08a97facf76c1e513bf999a1d988e23fb\": container with ID starting with 7bc933e302fe556f3f2333aabcc9a3d08a97facf76c1e513bf999a1d988e23fb not found: ID does not exist" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.022044 5124 scope.go:117] "RemoveContainer" 
containerID="5d0e243efad4a7977ff479d31ca346032d0b27e840a55f1e5a5d7bb273240f85" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.022328 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d0e243efad4a7977ff479d31ca346032d0b27e840a55f1e5a5d7bb273240f85"} err="failed to get container status \"5d0e243efad4a7977ff479d31ca346032d0b27e840a55f1e5a5d7bb273240f85\": rpc error: code = NotFound desc = could not find container \"5d0e243efad4a7977ff479d31ca346032d0b27e840a55f1e5a5d7bb273240f85\": container with ID starting with 5d0e243efad4a7977ff479d31ca346032d0b27e840a55f1e5a5d7bb273240f85 not found: ID does not exist" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.022355 5124 scope.go:117] "RemoveContainer" containerID="0d77c5aa52d9865a0987d699842af27c3031d5bc2f5c315f2214c950b8209d98" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.022536 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d77c5aa52d9865a0987d699842af27c3031d5bc2f5c315f2214c950b8209d98"} err="failed to get container status \"0d77c5aa52d9865a0987d699842af27c3031d5bc2f5c315f2214c950b8209d98\": rpc error: code = NotFound desc = could not find container \"0d77c5aa52d9865a0987d699842af27c3031d5bc2f5c315f2214c950b8209d98\": container with ID starting with 0d77c5aa52d9865a0987d699842af27c3031d5bc2f5c315f2214c950b8209d98 not found: ID does not exist" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.022561 5124 scope.go:117] "RemoveContainer" containerID="a739bd25adba363ac8e62a851d5bbc4e0970ab2b4d947f6b0abcb988e9b8ae33" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.022810 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a739bd25adba363ac8e62a851d5bbc4e0970ab2b4d947f6b0abcb988e9b8ae33"} err="failed to get container status \"a739bd25adba363ac8e62a851d5bbc4e0970ab2b4d947f6b0abcb988e9b8ae33\": rpc error: code = NotFound desc = could not find container \"a739bd25adba363ac8e62a851d5bbc4e0970ab2b4d947f6b0abcb988e9b8ae33\": container with ID starting with a739bd25adba363ac8e62a851d5bbc4e0970ab2b4d947f6b0abcb988e9b8ae33 not found: ID does not exist" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.022856 5124 scope.go:117] "RemoveContainer" containerID="0846e7a97039a12d11ae54a129bb1cc8d22304487515a089342a29f2e46c54cd" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.023057 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0846e7a97039a12d11ae54a129bb1cc8d22304487515a089342a29f2e46c54cd"} err="failed to get container status \"0846e7a97039a12d11ae54a129bb1cc8d22304487515a089342a29f2e46c54cd\": rpc error: code = NotFound desc = could not find container \"0846e7a97039a12d11ae54a129bb1cc8d22304487515a089342a29f2e46c54cd\": container with ID starting with 0846e7a97039a12d11ae54a129bb1cc8d22304487515a089342a29f2e46c54cd not found: ID does not exist" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.023082 5124 scope.go:117] "RemoveContainer" containerID="30bcd38ee5002aec5c579da18a01b6eea73a299bcf98658882e25e67b70e339f" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.023259 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30bcd38ee5002aec5c579da18a01b6eea73a299bcf98658882e25e67b70e339f"} err="failed to get container status \"30bcd38ee5002aec5c579da18a01b6eea73a299bcf98658882e25e67b70e339f\": rpc error: code = NotFound desc = could not find 
container \"30bcd38ee5002aec5c579da18a01b6eea73a299bcf98658882e25e67b70e339f\": container with ID starting with 30bcd38ee5002aec5c579da18a01b6eea73a299bcf98658882e25e67b70e339f not found: ID does not exist" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.023282 5124 scope.go:117] "RemoveContainer" containerID="b88d112d1a5a62e96208e9742a4f115e993356d2fe5cbb8114638a70a7504768" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.023456 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b88d112d1a5a62e96208e9742a4f115e993356d2fe5cbb8114638a70a7504768"} err="failed to get container status \"b88d112d1a5a62e96208e9742a4f115e993356d2fe5cbb8114638a70a7504768\": rpc error: code = NotFound desc = could not find container \"b88d112d1a5a62e96208e9742a4f115e993356d2fe5cbb8114638a70a7504768\": container with ID starting with b88d112d1a5a62e96208e9742a4f115e993356d2fe5cbb8114638a70a7504768 not found: ID does not exist" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.023480 5124 scope.go:117] "RemoveContainer" containerID="ca1b69bade3b1295f64aadb4876cc913493c85d40031cbb54db17ed26dd59b35" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.023671 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca1b69bade3b1295f64aadb4876cc913493c85d40031cbb54db17ed26dd59b35"} err="failed to get container status \"ca1b69bade3b1295f64aadb4876cc913493c85d40031cbb54db17ed26dd59b35\": rpc error: code = NotFound desc = could not find container \"ca1b69bade3b1295f64aadb4876cc913493c85d40031cbb54db17ed26dd59b35\": container with ID starting with ca1b69bade3b1295f64aadb4876cc913493c85d40031cbb54db17ed26dd59b35 not found: ID does not exist" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.023688 5124 scope.go:117] "RemoveContainer" containerID="b5f0b719809cb0822f685a25e50f161bb2ebc5cf1c23741f70dd758ddb876b3b" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.024148 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5f0b719809cb0822f685a25e50f161bb2ebc5cf1c23741f70dd758ddb876b3b"} err="failed to get container status \"b5f0b719809cb0822f685a25e50f161bb2ebc5cf1c23741f70dd758ddb876b3b\": rpc error: code = NotFound desc = could not find container \"b5f0b719809cb0822f685a25e50f161bb2ebc5cf1c23741f70dd758ddb876b3b\": container with ID starting with b5f0b719809cb0822f685a25e50f161bb2ebc5cf1c23741f70dd758ddb876b3b not found: ID does not exist" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.024186 5124 scope.go:117] "RemoveContainer" containerID="7bc933e302fe556f3f2333aabcc9a3d08a97facf76c1e513bf999a1d988e23fb" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.024526 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bc933e302fe556f3f2333aabcc9a3d08a97facf76c1e513bf999a1d988e23fb"} err="failed to get container status \"7bc933e302fe556f3f2333aabcc9a3d08a97facf76c1e513bf999a1d988e23fb\": rpc error: code = NotFound desc = could not find container \"7bc933e302fe556f3f2333aabcc9a3d08a97facf76c1e513bf999a1d988e23fb\": container with ID starting with 7bc933e302fe556f3f2333aabcc9a3d08a97facf76c1e513bf999a1d988e23fb not found: ID does not exist" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.024575 5124 scope.go:117] "RemoveContainer" containerID="5d0e243efad4a7977ff479d31ca346032d0b27e840a55f1e5a5d7bb273240f85" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.024976 5124 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d0e243efad4a7977ff479d31ca346032d0b27e840a55f1e5a5d7bb273240f85"} err="failed to get container status \"5d0e243efad4a7977ff479d31ca346032d0b27e840a55f1e5a5d7bb273240f85\": rpc error: code = NotFound desc = could not find container \"5d0e243efad4a7977ff479d31ca346032d0b27e840a55f1e5a5d7bb273240f85\": container with ID starting with 5d0e243efad4a7977ff479d31ca346032d0b27e840a55f1e5a5d7bb273240f85 not found: ID does not exist" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.025015 5124 scope.go:117] "RemoveContainer" containerID="0d77c5aa52d9865a0987d699842af27c3031d5bc2f5c315f2214c950b8209d98" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.025319 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d77c5aa52d9865a0987d699842af27c3031d5bc2f5c315f2214c950b8209d98"} err="failed to get container status \"0d77c5aa52d9865a0987d699842af27c3031d5bc2f5c315f2214c950b8209d98\": rpc error: code = NotFound desc = could not find container \"0d77c5aa52d9865a0987d699842af27c3031d5bc2f5c315f2214c950b8209d98\": container with ID starting with 0d77c5aa52d9865a0987d699842af27c3031d5bc2f5c315f2214c950b8209d98 not found: ID does not exist" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.025349 5124 scope.go:117] "RemoveContainer" containerID="a739bd25adba363ac8e62a851d5bbc4e0970ab2b4d947f6b0abcb988e9b8ae33" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.025612 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a739bd25adba363ac8e62a851d5bbc4e0970ab2b4d947f6b0abcb988e9b8ae33"} err="failed to get container status \"a739bd25adba363ac8e62a851d5bbc4e0970ab2b4d947f6b0abcb988e9b8ae33\": rpc error: code = NotFound desc = could not find container \"a739bd25adba363ac8e62a851d5bbc4e0970ab2b4d947f6b0abcb988e9b8ae33\": container with ID starting with a739bd25adba363ac8e62a851d5bbc4e0970ab2b4d947f6b0abcb988e9b8ae33 not found: ID does not exist" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.025639 5124 scope.go:117] "RemoveContainer" containerID="0846e7a97039a12d11ae54a129bb1cc8d22304487515a089342a29f2e46c54cd" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.026045 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0846e7a97039a12d11ae54a129bb1cc8d22304487515a089342a29f2e46c54cd"} err="failed to get container status \"0846e7a97039a12d11ae54a129bb1cc8d22304487515a089342a29f2e46c54cd\": rpc error: code = NotFound desc = could not find container \"0846e7a97039a12d11ae54a129bb1cc8d22304487515a089342a29f2e46c54cd\": container with ID starting with 0846e7a97039a12d11ae54a129bb1cc8d22304487515a089342a29f2e46c54cd not found: ID does not exist" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.026058 5124 scope.go:117] "RemoveContainer" containerID="30bcd38ee5002aec5c579da18a01b6eea73a299bcf98658882e25e67b70e339f" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.026713 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30bcd38ee5002aec5c579da18a01b6eea73a299bcf98658882e25e67b70e339f"} err="failed to get container status \"30bcd38ee5002aec5c579da18a01b6eea73a299bcf98658882e25e67b70e339f\": rpc error: code = NotFound desc = could not find container \"30bcd38ee5002aec5c579da18a01b6eea73a299bcf98658882e25e67b70e339f\": container with ID starting with 
30bcd38ee5002aec5c579da18a01b6eea73a299bcf98658882e25e67b70e339f not found: ID does not exist" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.026799 5124 scope.go:117] "RemoveContainer" containerID="b88d112d1a5a62e96208e9742a4f115e993356d2fe5cbb8114638a70a7504768" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.027209 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b88d112d1a5a62e96208e9742a4f115e993356d2fe5cbb8114638a70a7504768"} err="failed to get container status \"b88d112d1a5a62e96208e9742a4f115e993356d2fe5cbb8114638a70a7504768\": rpc error: code = NotFound desc = could not find container \"b88d112d1a5a62e96208e9742a4f115e993356d2fe5cbb8114638a70a7504768\": container with ID starting with b88d112d1a5a62e96208e9742a4f115e993356d2fe5cbb8114638a70a7504768 not found: ID does not exist" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.027243 5124 scope.go:117] "RemoveContainer" containerID="ca1b69bade3b1295f64aadb4876cc913493c85d40031cbb54db17ed26dd59b35" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.027673 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca1b69bade3b1295f64aadb4876cc913493c85d40031cbb54db17ed26dd59b35"} err="failed to get container status \"ca1b69bade3b1295f64aadb4876cc913493c85d40031cbb54db17ed26dd59b35\": rpc error: code = NotFound desc = could not find container \"ca1b69bade3b1295f64aadb4876cc913493c85d40031cbb54db17ed26dd59b35\": container with ID starting with ca1b69bade3b1295f64aadb4876cc913493c85d40031cbb54db17ed26dd59b35 not found: ID does not exist" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.027697 5124 scope.go:117] "RemoveContainer" containerID="b5f0b719809cb0822f685a25e50f161bb2ebc5cf1c23741f70dd758ddb876b3b" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.028181 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5f0b719809cb0822f685a25e50f161bb2ebc5cf1c23741f70dd758ddb876b3b"} err="failed to get container status \"b5f0b719809cb0822f685a25e50f161bb2ebc5cf1c23741f70dd758ddb876b3b\": rpc error: code = NotFound desc = could not find container \"b5f0b719809cb0822f685a25e50f161bb2ebc5cf1c23741f70dd758ddb876b3b\": container with ID starting with b5f0b719809cb0822f685a25e50f161bb2ebc5cf1c23741f70dd758ddb876b3b not found: ID does not exist" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.028209 5124 scope.go:117] "RemoveContainer" containerID="7bc933e302fe556f3f2333aabcc9a3d08a97facf76c1e513bf999a1d988e23fb" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.028508 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bc933e302fe556f3f2333aabcc9a3d08a97facf76c1e513bf999a1d988e23fb"} err="failed to get container status \"7bc933e302fe556f3f2333aabcc9a3d08a97facf76c1e513bf999a1d988e23fb\": rpc error: code = NotFound desc = could not find container \"7bc933e302fe556f3f2333aabcc9a3d08a97facf76c1e513bf999a1d988e23fb\": container with ID starting with 7bc933e302fe556f3f2333aabcc9a3d08a97facf76c1e513bf999a1d988e23fb not found: ID does not exist" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.028530 5124 scope.go:117] "RemoveContainer" containerID="5d0e243efad4a7977ff479d31ca346032d0b27e840a55f1e5a5d7bb273240f85" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.028892 5124 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5d0e243efad4a7977ff479d31ca346032d0b27e840a55f1e5a5d7bb273240f85"} err="failed to get container status \"5d0e243efad4a7977ff479d31ca346032d0b27e840a55f1e5a5d7bb273240f85\": rpc error: code = NotFound desc = could not find container \"5d0e243efad4a7977ff479d31ca346032d0b27e840a55f1e5a5d7bb273240f85\": container with ID starting with 5d0e243efad4a7977ff479d31ca346032d0b27e840a55f1e5a5d7bb273240f85 not found: ID does not exist" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.028915 5124 scope.go:117] "RemoveContainer" containerID="0d77c5aa52d9865a0987d699842af27c3031d5bc2f5c315f2214c950b8209d98" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.029124 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d77c5aa52d9865a0987d699842af27c3031d5bc2f5c315f2214c950b8209d98"} err="failed to get container status \"0d77c5aa52d9865a0987d699842af27c3031d5bc2f5c315f2214c950b8209d98\": rpc error: code = NotFound desc = could not find container \"0d77c5aa52d9865a0987d699842af27c3031d5bc2f5c315f2214c950b8209d98\": container with ID starting with 0d77c5aa52d9865a0987d699842af27c3031d5bc2f5c315f2214c950b8209d98 not found: ID does not exist" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.090908 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.149059 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-sdh5t"] Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.152823 5124 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-sdh5t"] Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.813181 5124 generic.go:358] "Generic (PLEG): container finished" podID="8d55d96b-7f79-4b99-add1-b38c6cb96f5e" containerID="13d5d0faccd77900d8dc8fcb4752a923aee17450d805dbabde75c464e35c3d3c" exitCode=0 Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.813292 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" event={"ID":"8d55d96b-7f79-4b99-add1-b38c6cb96f5e","Type":"ContainerDied","Data":"13d5d0faccd77900d8dc8fcb4752a923aee17450d805dbabde75c464e35c3d3c"} Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.813347 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" event={"ID":"8d55d96b-7f79-4b99-add1-b38c6cb96f5e","Type":"ContainerStarted","Data":"3e0ccd172a033bc9bceb656b5b0e280077b23997bd38e10fb55b34ee3e59cbc4"} Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.817358 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-smnb7_f826f136-a910-4120-aa62-a08e427590c0/kube-multus/0.log" Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.817790 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-smnb7" event={"ID":"f826f136-a910-4120-aa62-a08e427590c0","Type":"ContainerStarted","Data":"818731565d363146f556d3dd81d47e76659d1d6c1c79ecfb18747d7f4e12de08"} Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.819197 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-wt48j" event={"ID":"ec31e507-1a30-4028-b078-0686d3cedc4e","Type":"ContainerStarted","Data":"3874d8db71318c1bfd0eadb031bb5ef8afc357daa6cc075f510a385e92739588"} Jan 26 00:19:51 crc kubenswrapper[5124]: 
I0126 00:19:51.819224 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-wt48j" event={"ID":"ec31e507-1a30-4028-b078-0686d3cedc4e","Type":"ContainerStarted","Data":"37668c32b2839c8bfafa8194538027176664db2c94388f3acd1e9b083f9ca5af"} Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.819233 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-wt48j" event={"ID":"ec31e507-1a30-4028-b078-0686d3cedc4e","Type":"ContainerStarted","Data":"e818d743f99d8dbdc86d947f8da8944495ecc54b0e48813633b81eaeb5902359"} Jan 26 00:19:51 crc kubenswrapper[5124]: I0126 00:19:51.902766 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-wt48j" podStartSLOduration=2.902745194 podStartE2EDuration="2.902745194s" podCreationTimestamp="2026-01-26 00:19:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:19:51.902115457 +0000 UTC m=+669.811034816" watchObservedRunningTime="2026-01-26 00:19:51.902745194 +0000 UTC m=+669.811664543" Jan 26 00:19:52 crc kubenswrapper[5124]: I0126 00:19:52.375478 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8660dad9-43c8-4c00-872a-e00a6baab0f7" path="/var/lib/kubelet/pods/8660dad9-43c8-4c00-872a-e00a6baab0f7/volumes" Jan 26 00:19:52 crc kubenswrapper[5124]: I0126 00:19:52.376649 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d13181a0-d54a-460b-bbc7-4948fb1a4eaf" path="/var/lib/kubelet/pods/d13181a0-d54a-460b-bbc7-4948fb1a4eaf/volumes" Jan 26 00:19:52 crc kubenswrapper[5124]: I0126 00:19:52.827494 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" event={"ID":"8d55d96b-7f79-4b99-add1-b38c6cb96f5e","Type":"ContainerStarted","Data":"015c60cbafc639cd7c4702e8179f50c506eb1f38eb59e81c3ff5513667f75803"} Jan 26 00:19:52 crc kubenswrapper[5124]: I0126 00:19:52.827551 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" event={"ID":"8d55d96b-7f79-4b99-add1-b38c6cb96f5e","Type":"ContainerStarted","Data":"7a572f869bc1ae0fbd73d4610408e1e683e291b99c9408cbe5e2fcdb275ad83c"} Jan 26 00:19:52 crc kubenswrapper[5124]: I0126 00:19:52.827568 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" event={"ID":"8d55d96b-7f79-4b99-add1-b38c6cb96f5e","Type":"ContainerStarted","Data":"91ddede6e6b186f1083053f7addad92a202f3a304cd8e0950bfa9a8bae5279f6"} Jan 26 00:19:52 crc kubenswrapper[5124]: I0126 00:19:52.827606 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" event={"ID":"8d55d96b-7f79-4b99-add1-b38c6cb96f5e","Type":"ContainerStarted","Data":"a6b07cda0683b07ea1e652e2c5ca6af7eeccf89c1a8a3c7f0c8b51fb5645bf49"} Jan 26 00:19:53 crc kubenswrapper[5124]: I0126 00:19:53.838011 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" event={"ID":"8d55d96b-7f79-4b99-add1-b38c6cb96f5e","Type":"ContainerStarted","Data":"d8854022aece47c18ac4a8bfc0c079aed34dac971f686b3f9603772d31b8a754"} Jan 26 00:19:53 crc kubenswrapper[5124]: I0126 00:19:53.838421 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" 
event={"ID":"8d55d96b-7f79-4b99-add1-b38c6cb96f5e","Type":"ContainerStarted","Data":"c0831368e50e44668fc7df9f0f0c866efd0e39ac0cc13762cdcc8f2e4a808e9b"} Jan 26 00:19:55 crc kubenswrapper[5124]: I0126 00:19:55.851206 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" event={"ID":"8d55d96b-7f79-4b99-add1-b38c6cb96f5e","Type":"ContainerStarted","Data":"5b55ffa170e5653ab6bfb332ce5c2e7f96be0085ec1726e2ef1ebb2c08f4863f"} Jan 26 00:19:59 crc kubenswrapper[5124]: I0126 00:19:59.877628 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" event={"ID":"8d55d96b-7f79-4b99-add1-b38c6cb96f5e","Type":"ContainerStarted","Data":"a8b9d4597b5040d08a04551fc7f11497221147f0d35b8b03e9735e2a0e3edec7"} Jan 26 00:19:59 crc kubenswrapper[5124]: I0126 00:19:59.878165 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:59 crc kubenswrapper[5124]: I0126 00:19:59.878181 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:59 crc kubenswrapper[5124]: I0126 00:19:59.878192 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:59 crc kubenswrapper[5124]: I0126 00:19:59.908262 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:19:59 crc kubenswrapper[5124]: I0126 00:19:59.914621 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" podStartSLOduration=9.914599584 podStartE2EDuration="9.914599584s" podCreationTimestamp="2026-01-26 00:19:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:19:59.908088166 +0000 UTC m=+677.817007525" watchObservedRunningTime="2026-01-26 00:19:59.914599584 +0000 UTC m=+677.823518933" Jan 26 00:19:59 crc kubenswrapper[5124]: I0126 00:19:59.923541 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:20:00 crc kubenswrapper[5124]: I0126 00:20:00.132467 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29489780-qvnlc"] Jan 26 00:20:00 crc kubenswrapper[5124]: I0126 00:20:00.932473 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489780-qvnlc" Jan 26 00:20:00 crc kubenswrapper[5124]: I0126 00:20:00.935413 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-26tfw\"" Jan 26 00:20:00 crc kubenswrapper[5124]: I0126 00:20:00.935615 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 26 00:20:00 crc kubenswrapper[5124]: I0126 00:20:00.935645 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 26 00:20:00 crc kubenswrapper[5124]: I0126 00:20:00.948100 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489780-qvnlc"] Jan 26 00:20:01 crc kubenswrapper[5124]: I0126 00:20:01.077944 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjtzd\" (UniqueName: \"kubernetes.io/projected/c7cbfe39-767f-4343-96cd-cda76678d60c-kube-api-access-gjtzd\") pod \"auto-csr-approver-29489780-qvnlc\" (UID: \"c7cbfe39-767f-4343-96cd-cda76678d60c\") " pod="openshift-infra/auto-csr-approver-29489780-qvnlc" Jan 26 00:20:01 crc kubenswrapper[5124]: I0126 00:20:01.180955 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gjtzd\" (UniqueName: \"kubernetes.io/projected/c7cbfe39-767f-4343-96cd-cda76678d60c-kube-api-access-gjtzd\") pod \"auto-csr-approver-29489780-qvnlc\" (UID: \"c7cbfe39-767f-4343-96cd-cda76678d60c\") " pod="openshift-infra/auto-csr-approver-29489780-qvnlc" Jan 26 00:20:01 crc kubenswrapper[5124]: I0126 00:20:01.203009 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjtzd\" (UniqueName: \"kubernetes.io/projected/c7cbfe39-767f-4343-96cd-cda76678d60c-kube-api-access-gjtzd\") pod \"auto-csr-approver-29489780-qvnlc\" (UID: \"c7cbfe39-767f-4343-96cd-cda76678d60c\") " pod="openshift-infra/auto-csr-approver-29489780-qvnlc" Jan 26 00:20:01 crc kubenswrapper[5124]: I0126 00:20:01.251280 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489780-qvnlc" Jan 26 00:20:01 crc kubenswrapper[5124]: E0126 00:20:01.280741 5124 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29489780-qvnlc_openshift-infra_c7cbfe39-767f-4343-96cd-cda76678d60c_0(2179e6c8514bc6e38350415da3cf754f105014ab949ac414d635e6a509ad2e71): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 00:20:01 crc kubenswrapper[5124]: E0126 00:20:01.280814 5124 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29489780-qvnlc_openshift-infra_c7cbfe39-767f-4343-96cd-cda76678d60c_0(2179e6c8514bc6e38350415da3cf754f105014ab949ac414d635e6a509ad2e71): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-infra/auto-csr-approver-29489780-qvnlc" Jan 26 00:20:01 crc kubenswrapper[5124]: E0126 00:20:01.280837 5124 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29489780-qvnlc_openshift-infra_c7cbfe39-767f-4343-96cd-cda76678d60c_0(2179e6c8514bc6e38350415da3cf754f105014ab949ac414d635e6a509ad2e71): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-infra/auto-csr-approver-29489780-qvnlc" Jan 26 00:20:01 crc kubenswrapper[5124]: E0126 00:20:01.280890 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"auto-csr-approver-29489780-qvnlc_openshift-infra(c7cbfe39-767f-4343-96cd-cda76678d60c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"auto-csr-approver-29489780-qvnlc_openshift-infra(c7cbfe39-767f-4343-96cd-cda76678d60c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29489780-qvnlc_openshift-infra_c7cbfe39-767f-4343-96cd-cda76678d60c_0(2179e6c8514bc6e38350415da3cf754f105014ab949ac414d635e6a509ad2e71): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-infra/auto-csr-approver-29489780-qvnlc" podUID="c7cbfe39-767f-4343-96cd-cda76678d60c" Jan 26 00:20:01 crc kubenswrapper[5124]: I0126 00:20:01.888389 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489780-qvnlc" Jan 26 00:20:01 crc kubenswrapper[5124]: I0126 00:20:01.889029 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489780-qvnlc" Jan 26 00:20:01 crc kubenswrapper[5124]: E0126 00:20:01.912992 5124 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29489780-qvnlc_openshift-infra_c7cbfe39-767f-4343-96cd-cda76678d60c_0(eec684718467f4ee0d294ee182ccb7ae445b4d82ff2e1f3fb59d2fab177a6fc7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 00:20:01 crc kubenswrapper[5124]: E0126 00:20:01.913070 5124 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29489780-qvnlc_openshift-infra_c7cbfe39-767f-4343-96cd-cda76678d60c_0(eec684718467f4ee0d294ee182ccb7ae445b4d82ff2e1f3fb59d2fab177a6fc7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-infra/auto-csr-approver-29489780-qvnlc" Jan 26 00:20:01 crc kubenswrapper[5124]: E0126 00:20:01.913095 5124 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29489780-qvnlc_openshift-infra_c7cbfe39-767f-4343-96cd-cda76678d60c_0(eec684718467f4ee0d294ee182ccb7ae445b4d82ff2e1f3fb59d2fab177a6fc7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-infra/auto-csr-approver-29489780-qvnlc" Jan 26 00:20:01 crc kubenswrapper[5124]: E0126 00:20:01.913153 5124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"auto-csr-approver-29489780-qvnlc_openshift-infra(c7cbfe39-767f-4343-96cd-cda76678d60c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"auto-csr-approver-29489780-qvnlc_openshift-infra(c7cbfe39-767f-4343-96cd-cda76678d60c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29489780-qvnlc_openshift-infra_c7cbfe39-767f-4343-96cd-cda76678d60c_0(eec684718467f4ee0d294ee182ccb7ae445b4d82ff2e1f3fb59d2fab177a6fc7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-infra/auto-csr-approver-29489780-qvnlc" podUID="c7cbfe39-767f-4343-96cd-cda76678d60c" Jan 26 00:20:10 crc kubenswrapper[5124]: I0126 00:20:10.830071 5124 patch_prober.go:28] interesting pod/machine-config-daemon-kmxcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:20:10 crc kubenswrapper[5124]: I0126 00:20:10.830426 5124 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" podUID="95fa0656-150a-4d93-a324-77a1306d91f7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:20:13 crc kubenswrapper[5124]: I0126 00:20:13.365307 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489780-qvnlc" Jan 26 00:20:13 crc kubenswrapper[5124]: I0126 00:20:13.366135 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489780-qvnlc" Jan 26 00:20:13 crc kubenswrapper[5124]: I0126 00:20:13.532821 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489780-qvnlc"] Jan 26 00:20:13 crc kubenswrapper[5124]: W0126 00:20:13.539635 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7cbfe39_767f_4343_96cd_cda76678d60c.slice/crio-186f0d17259f801975f6c073e3c775346d16bbb373e6bd236b708727300a8fba WatchSource:0}: Error finding container 186f0d17259f801975f6c073e3c775346d16bbb373e6bd236b708727300a8fba: Status 404 returned error can't find the container with id 186f0d17259f801975f6c073e3c775346d16bbb373e6bd236b708727300a8fba Jan 26 00:20:13 crc kubenswrapper[5124]: I0126 00:20:13.949911 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489780-qvnlc" event={"ID":"c7cbfe39-767f-4343-96cd-cda76678d60c","Type":"ContainerStarted","Data":"186f0d17259f801975f6c073e3c775346d16bbb373e6bd236b708727300a8fba"} Jan 26 00:20:15 crc kubenswrapper[5124]: I0126 00:20:15.961730 5124 generic.go:358] "Generic (PLEG): container finished" podID="c7cbfe39-767f-4343-96cd-cda76678d60c" containerID="a6cc4c7c30d62521c22daa2e1c43e9bab237c5a29aa4c1e42b8e975ba4af144b" exitCode=0 Jan 26 00:20:15 crc kubenswrapper[5124]: I0126 00:20:15.961819 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489780-qvnlc" event={"ID":"c7cbfe39-767f-4343-96cd-cda76678d60c","Type":"ContainerDied","Data":"a6cc4c7c30d62521c22daa2e1c43e9bab237c5a29aa4c1e42b8e975ba4af144b"} Jan 26 00:20:17 crc kubenswrapper[5124]: I0126 00:20:17.171603 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489780-qvnlc" Jan 26 00:20:17 crc kubenswrapper[5124]: I0126 00:20:17.187216 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjtzd\" (UniqueName: \"kubernetes.io/projected/c7cbfe39-767f-4343-96cd-cda76678d60c-kube-api-access-gjtzd\") pod \"c7cbfe39-767f-4343-96cd-cda76678d60c\" (UID: \"c7cbfe39-767f-4343-96cd-cda76678d60c\") " Jan 26 00:20:17 crc kubenswrapper[5124]: I0126 00:20:17.194152 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7cbfe39-767f-4343-96cd-cda76678d60c-kube-api-access-gjtzd" (OuterVolumeSpecName: "kube-api-access-gjtzd") pod "c7cbfe39-767f-4343-96cd-cda76678d60c" (UID: "c7cbfe39-767f-4343-96cd-cda76678d60c"). InnerVolumeSpecName "kube-api-access-gjtzd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:20:17 crc kubenswrapper[5124]: I0126 00:20:17.288938 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gjtzd\" (UniqueName: \"kubernetes.io/projected/c7cbfe39-767f-4343-96cd-cda76678d60c-kube-api-access-gjtzd\") on node \"crc\" DevicePath \"\"" Jan 26 00:20:17 crc kubenswrapper[5124]: I0126 00:20:17.974717 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489780-qvnlc" Jan 26 00:20:17 crc kubenswrapper[5124]: I0126 00:20:17.974767 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489780-qvnlc" event={"ID":"c7cbfe39-767f-4343-96cd-cda76678d60c","Type":"ContainerDied","Data":"186f0d17259f801975f6c073e3c775346d16bbb373e6bd236b708727300a8fba"} Jan 26 00:20:17 crc kubenswrapper[5124]: I0126 00:20:17.974822 5124 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="186f0d17259f801975f6c073e3c775346d16bbb373e6bd236b708727300a8fba" Jan 26 00:20:31 crc kubenswrapper[5124]: I0126 00:20:31.909358 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2zwq5" Jan 26 00:20:40 crc kubenswrapper[5124]: I0126 00:20:40.830063 5124 patch_prober.go:28] interesting pod/machine-config-daemon-kmxcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:20:40 crc kubenswrapper[5124]: I0126 00:20:40.830896 5124 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" podUID="95fa0656-150a-4d93-a324-77a1306d91f7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:20:40 crc kubenswrapper[5124]: I0126 00:20:40.830965 5124 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" Jan 26 00:20:40 crc kubenswrapper[5124]: I0126 00:20:40.831918 5124 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bf0d2bc539a7272b2b55b13ae5225aa87fa06ada3cce31edaeaa612f3511ce10"} pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 00:20:40 crc kubenswrapper[5124]: I0126 00:20:40.832007 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" podUID="95fa0656-150a-4d93-a324-77a1306d91f7" containerName="machine-config-daemon" containerID="cri-o://bf0d2bc539a7272b2b55b13ae5225aa87fa06ada3cce31edaeaa612f3511ce10" gracePeriod=600 Jan 26 00:20:41 crc kubenswrapper[5124]: I0126 00:20:41.111916 5124 generic.go:358] "Generic (PLEG): container finished" podID="95fa0656-150a-4d93-a324-77a1306d91f7" containerID="bf0d2bc539a7272b2b55b13ae5225aa87fa06ada3cce31edaeaa612f3511ce10" exitCode=0 Jan 26 00:20:41 crc kubenswrapper[5124]: I0126 00:20:41.112032 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" event={"ID":"95fa0656-150a-4d93-a324-77a1306d91f7","Type":"ContainerDied","Data":"bf0d2bc539a7272b2b55b13ae5225aa87fa06ada3cce31edaeaa612f3511ce10"} Jan 26 00:20:41 crc kubenswrapper[5124]: I0126 00:20:41.112078 5124 scope.go:117] "RemoveContainer" containerID="6d673794be664ea88f97aff7d6202b405eb46b2e426b73ffc27f0c5fba62377f" Jan 26 00:20:42 crc kubenswrapper[5124]: I0126 00:20:42.119643 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" 
event={"ID":"95fa0656-150a-4d93-a324-77a1306d91f7","Type":"ContainerStarted","Data":"79635baa3ffeb5e4c69b5bd5a6a7d2d5fea58437cda8cef86f8317b3f38ad143"} Jan 26 00:20:42 crc kubenswrapper[5124]: I0126 00:20:42.776121 5124 scope.go:117] "RemoveContainer" containerID="b6f2454f5333ab911eebfd64bf0a3fabf18ab1bbf4c865a0ff147603146a0da7" Jan 26 00:20:42 crc kubenswrapper[5124]: I0126 00:20:42.791575 5124 scope.go:117] "RemoveContainer" containerID="30d8c5e11238102663950d2ebd33f9bc42936ba7b859ad8cbb88cd6f37520d8b" Jan 26 00:21:02 crc kubenswrapper[5124]: I0126 00:21:02.192014 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8cbbm"] Jan 26 00:21:02 crc kubenswrapper[5124]: I0126 00:21:02.193181 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8cbbm" podUID="9990d252-13eb-476d-a56a-6f40fad4a3a3" containerName="registry-server" containerID="cri-o://52fc3f19607f66e9171c223b89b0829a4903bfc0696f6cd63e0c3593df26bbc9" gracePeriod=30 Jan 26 00:21:02 crc kubenswrapper[5124]: I0126 00:21:02.644913 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8cbbm" Jan 26 00:21:02 crc kubenswrapper[5124]: I0126 00:21:02.802130 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9990d252-13eb-476d-a56a-6f40fad4a3a3-utilities\") pod \"9990d252-13eb-476d-a56a-6f40fad4a3a3\" (UID: \"9990d252-13eb-476d-a56a-6f40fad4a3a3\") " Jan 26 00:21:02 crc kubenswrapper[5124]: I0126 00:21:02.802264 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92b5p\" (UniqueName: \"kubernetes.io/projected/9990d252-13eb-476d-a56a-6f40fad4a3a3-kube-api-access-92b5p\") pod \"9990d252-13eb-476d-a56a-6f40fad4a3a3\" (UID: \"9990d252-13eb-476d-a56a-6f40fad4a3a3\") " Jan 26 00:21:02 crc kubenswrapper[5124]: I0126 00:21:02.802290 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9990d252-13eb-476d-a56a-6f40fad4a3a3-catalog-content\") pod \"9990d252-13eb-476d-a56a-6f40fad4a3a3\" (UID: \"9990d252-13eb-476d-a56a-6f40fad4a3a3\") " Jan 26 00:21:02 crc kubenswrapper[5124]: I0126 00:21:02.804634 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9990d252-13eb-476d-a56a-6f40fad4a3a3-utilities" (OuterVolumeSpecName: "utilities") pod "9990d252-13eb-476d-a56a-6f40fad4a3a3" (UID: "9990d252-13eb-476d-a56a-6f40fad4a3a3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:21:02 crc kubenswrapper[5124]: I0126 00:21:02.814975 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9990d252-13eb-476d-a56a-6f40fad4a3a3-kube-api-access-92b5p" (OuterVolumeSpecName: "kube-api-access-92b5p") pod "9990d252-13eb-476d-a56a-6f40fad4a3a3" (UID: "9990d252-13eb-476d-a56a-6f40fad4a3a3"). InnerVolumeSpecName "kube-api-access-92b5p". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:21:02 crc kubenswrapper[5124]: I0126 00:21:02.822004 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9990d252-13eb-476d-a56a-6f40fad4a3a3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9990d252-13eb-476d-a56a-6f40fad4a3a3" (UID: "9990d252-13eb-476d-a56a-6f40fad4a3a3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:21:02 crc kubenswrapper[5124]: I0126 00:21:02.903448 5124 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9990d252-13eb-476d-a56a-6f40fad4a3a3-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:21:02 crc kubenswrapper[5124]: I0126 00:21:02.903502 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-92b5p\" (UniqueName: \"kubernetes.io/projected/9990d252-13eb-476d-a56a-6f40fad4a3a3-kube-api-access-92b5p\") on node \"crc\" DevicePath \"\"" Jan 26 00:21:02 crc kubenswrapper[5124]: I0126 00:21:02.903515 5124 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9990d252-13eb-476d-a56a-6f40fad4a3a3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:21:03 crc kubenswrapper[5124]: I0126 00:21:03.252913 5124 generic.go:358] "Generic (PLEG): container finished" podID="9990d252-13eb-476d-a56a-6f40fad4a3a3" containerID="52fc3f19607f66e9171c223b89b0829a4903bfc0696f6cd63e0c3593df26bbc9" exitCode=0 Jan 26 00:21:03 crc kubenswrapper[5124]: I0126 00:21:03.253042 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8cbbm" event={"ID":"9990d252-13eb-476d-a56a-6f40fad4a3a3","Type":"ContainerDied","Data":"52fc3f19607f66e9171c223b89b0829a4903bfc0696f6cd63e0c3593df26bbc9"} Jan 26 00:21:03 crc kubenswrapper[5124]: I0126 00:21:03.253116 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8cbbm" event={"ID":"9990d252-13eb-476d-a56a-6f40fad4a3a3","Type":"ContainerDied","Data":"4802644de59bed6c8d093039a38f958b68d5e1cc5fe7f848cd413cdd68b9e6b1"} Jan 26 00:21:03 crc kubenswrapper[5124]: I0126 00:21:03.253142 5124 scope.go:117] "RemoveContainer" containerID="52fc3f19607f66e9171c223b89b0829a4903bfc0696f6cd63e0c3593df26bbc9" Jan 26 00:21:03 crc kubenswrapper[5124]: I0126 00:21:03.253141 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8cbbm" Jan 26 00:21:03 crc kubenswrapper[5124]: I0126 00:21:03.279872 5124 scope.go:117] "RemoveContainer" containerID="99be3caa5ebf3f1090d443f1d0b283a815f74904e915d1431dcfe7679029ddea" Jan 26 00:21:03 crc kubenswrapper[5124]: I0126 00:21:03.307572 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8cbbm"] Jan 26 00:21:03 crc kubenswrapper[5124]: I0126 00:21:03.315214 5124 scope.go:117] "RemoveContainer" containerID="242ead44f4179a1a918ed390417c22a41292d646256eaa2e8f62ed1735bb8f1d" Jan 26 00:21:03 crc kubenswrapper[5124]: I0126 00:21:03.316444 5124 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8cbbm"] Jan 26 00:21:03 crc kubenswrapper[5124]: I0126 00:21:03.347119 5124 scope.go:117] "RemoveContainer" containerID="52fc3f19607f66e9171c223b89b0829a4903bfc0696f6cd63e0c3593df26bbc9" Jan 26 00:21:03 crc kubenswrapper[5124]: E0126 00:21:03.347604 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52fc3f19607f66e9171c223b89b0829a4903bfc0696f6cd63e0c3593df26bbc9\": container with ID starting with 52fc3f19607f66e9171c223b89b0829a4903bfc0696f6cd63e0c3593df26bbc9 not found: ID does not exist" containerID="52fc3f19607f66e9171c223b89b0829a4903bfc0696f6cd63e0c3593df26bbc9" Jan 26 00:21:03 crc kubenswrapper[5124]: I0126 00:21:03.347645 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52fc3f19607f66e9171c223b89b0829a4903bfc0696f6cd63e0c3593df26bbc9"} err="failed to get container status \"52fc3f19607f66e9171c223b89b0829a4903bfc0696f6cd63e0c3593df26bbc9\": rpc error: code = NotFound desc = could not find container \"52fc3f19607f66e9171c223b89b0829a4903bfc0696f6cd63e0c3593df26bbc9\": container with ID starting with 52fc3f19607f66e9171c223b89b0829a4903bfc0696f6cd63e0c3593df26bbc9 not found: ID does not exist" Jan 26 00:21:03 crc kubenswrapper[5124]: I0126 00:21:03.347677 5124 scope.go:117] "RemoveContainer" containerID="99be3caa5ebf3f1090d443f1d0b283a815f74904e915d1431dcfe7679029ddea" Jan 26 00:21:03 crc kubenswrapper[5124]: E0126 00:21:03.347932 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99be3caa5ebf3f1090d443f1d0b283a815f74904e915d1431dcfe7679029ddea\": container with ID starting with 99be3caa5ebf3f1090d443f1d0b283a815f74904e915d1431dcfe7679029ddea not found: ID does not exist" containerID="99be3caa5ebf3f1090d443f1d0b283a815f74904e915d1431dcfe7679029ddea" Jan 26 00:21:03 crc kubenswrapper[5124]: I0126 00:21:03.348004 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99be3caa5ebf3f1090d443f1d0b283a815f74904e915d1431dcfe7679029ddea"} err="failed to get container status \"99be3caa5ebf3f1090d443f1d0b283a815f74904e915d1431dcfe7679029ddea\": rpc error: code = NotFound desc = could not find container \"99be3caa5ebf3f1090d443f1d0b283a815f74904e915d1431dcfe7679029ddea\": container with ID starting with 99be3caa5ebf3f1090d443f1d0b283a815f74904e915d1431dcfe7679029ddea not found: ID does not exist" Jan 26 00:21:03 crc kubenswrapper[5124]: I0126 00:21:03.348017 5124 scope.go:117] "RemoveContainer" containerID="242ead44f4179a1a918ed390417c22a41292d646256eaa2e8f62ed1735bb8f1d" Jan 26 00:21:03 crc kubenswrapper[5124]: E0126 00:21:03.348331 5124 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"242ead44f4179a1a918ed390417c22a41292d646256eaa2e8f62ed1735bb8f1d\": container with ID starting with 242ead44f4179a1a918ed390417c22a41292d646256eaa2e8f62ed1735bb8f1d not found: ID does not exist" containerID="242ead44f4179a1a918ed390417c22a41292d646256eaa2e8f62ed1735bb8f1d" Jan 26 00:21:03 crc kubenswrapper[5124]: I0126 00:21:03.348353 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"242ead44f4179a1a918ed390417c22a41292d646256eaa2e8f62ed1735bb8f1d"} err="failed to get container status \"242ead44f4179a1a918ed390417c22a41292d646256eaa2e8f62ed1735bb8f1d\": rpc error: code = NotFound desc = could not find container \"242ead44f4179a1a918ed390417c22a41292d646256eaa2e8f62ed1735bb8f1d\": container with ID starting with 242ead44f4179a1a918ed390417c22a41292d646256eaa2e8f62ed1735bb8f1d not found: ID does not exist" Jan 26 00:21:04 crc kubenswrapper[5124]: I0126 00:21:04.380993 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9990d252-13eb-476d-a56a-6f40fad4a3a3" path="/var/lib/kubelet/pods/9990d252-13eb-476d-a56a-6f40fad4a3a3/volumes" Jan 26 00:21:06 crc kubenswrapper[5124]: I0126 00:21:06.335147 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08srb2q"] Jan 26 00:21:06 crc kubenswrapper[5124]: I0126 00:21:06.336070 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9990d252-13eb-476d-a56a-6f40fad4a3a3" containerName="extract-utilities" Jan 26 00:21:06 crc kubenswrapper[5124]: I0126 00:21:06.336084 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="9990d252-13eb-476d-a56a-6f40fad4a3a3" containerName="extract-utilities" Jan 26 00:21:06 crc kubenswrapper[5124]: I0126 00:21:06.336093 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9990d252-13eb-476d-a56a-6f40fad4a3a3" containerName="extract-content" Jan 26 00:21:06 crc kubenswrapper[5124]: I0126 00:21:06.336101 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="9990d252-13eb-476d-a56a-6f40fad4a3a3" containerName="extract-content" Jan 26 00:21:06 crc kubenswrapper[5124]: I0126 00:21:06.336128 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c7cbfe39-767f-4343-96cd-cda76678d60c" containerName="oc" Jan 26 00:21:06 crc kubenswrapper[5124]: I0126 00:21:06.336135 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7cbfe39-767f-4343-96cd-cda76678d60c" containerName="oc" Jan 26 00:21:06 crc kubenswrapper[5124]: I0126 00:21:06.336145 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9990d252-13eb-476d-a56a-6f40fad4a3a3" containerName="registry-server" Jan 26 00:21:06 crc kubenswrapper[5124]: I0126 00:21:06.336152 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="9990d252-13eb-476d-a56a-6f40fad4a3a3" containerName="registry-server" Jan 26 00:21:06 crc kubenswrapper[5124]: I0126 00:21:06.336269 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="c7cbfe39-767f-4343-96cd-cda76678d60c" containerName="oc" Jan 26 00:21:06 crc kubenswrapper[5124]: I0126 00:21:06.336289 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="9990d252-13eb-476d-a56a-6f40fad4a3a3" containerName="registry-server" Jan 26 00:21:06 crc kubenswrapper[5124]: I0126 00:21:06.339730 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08srb2q" Jan 26 00:21:06 crc kubenswrapper[5124]: I0126 00:21:06.343787 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Jan 26 00:21:06 crc kubenswrapper[5124]: I0126 00:21:06.353805 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4aff954-1afc-4dd4-8935-fa0cc1cebec6-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08srb2q\" (UID: \"a4aff954-1afc-4dd4-8935-fa0cc1cebec6\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08srb2q" Jan 26 00:21:06 crc kubenswrapper[5124]: I0126 00:21:06.354385 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glnm7\" (UniqueName: \"kubernetes.io/projected/a4aff954-1afc-4dd4-8935-fa0cc1cebec6-kube-api-access-glnm7\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08srb2q\" (UID: \"a4aff954-1afc-4dd4-8935-fa0cc1cebec6\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08srb2q" Jan 26 00:21:06 crc kubenswrapper[5124]: I0126 00:21:06.354663 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4aff954-1afc-4dd4-8935-fa0cc1cebec6-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08srb2q\" (UID: \"a4aff954-1afc-4dd4-8935-fa0cc1cebec6\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08srb2q" Jan 26 00:21:06 crc kubenswrapper[5124]: I0126 00:21:06.353836 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08srb2q"] Jan 26 00:21:06 crc kubenswrapper[5124]: I0126 00:21:06.456664 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4aff954-1afc-4dd4-8935-fa0cc1cebec6-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08srb2q\" (UID: \"a4aff954-1afc-4dd4-8935-fa0cc1cebec6\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08srb2q" Jan 26 00:21:06 crc kubenswrapper[5124]: I0126 00:21:06.456752 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-glnm7\" (UniqueName: \"kubernetes.io/projected/a4aff954-1afc-4dd4-8935-fa0cc1cebec6-kube-api-access-glnm7\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08srb2q\" (UID: \"a4aff954-1afc-4dd4-8935-fa0cc1cebec6\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08srb2q" Jan 26 00:21:06 crc kubenswrapper[5124]: I0126 00:21:06.456795 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4aff954-1afc-4dd4-8935-fa0cc1cebec6-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08srb2q\" (UID: \"a4aff954-1afc-4dd4-8935-fa0cc1cebec6\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08srb2q" Jan 26 00:21:06 crc kubenswrapper[5124]: I0126 00:21:06.457183 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/a4aff954-1afc-4dd4-8935-fa0cc1cebec6-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08srb2q\" (UID: \"a4aff954-1afc-4dd4-8935-fa0cc1cebec6\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08srb2q" Jan 26 00:21:06 crc kubenswrapper[5124]: I0126 00:21:06.457485 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4aff954-1afc-4dd4-8935-fa0cc1cebec6-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08srb2q\" (UID: \"a4aff954-1afc-4dd4-8935-fa0cc1cebec6\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08srb2q" Jan 26 00:21:06 crc kubenswrapper[5124]: I0126 00:21:06.485821 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-glnm7\" (UniqueName: \"kubernetes.io/projected/a4aff954-1afc-4dd4-8935-fa0cc1cebec6-kube-api-access-glnm7\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08srb2q\" (UID: \"a4aff954-1afc-4dd4-8935-fa0cc1cebec6\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08srb2q" Jan 26 00:21:06 crc kubenswrapper[5124]: I0126 00:21:06.657097 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08srb2q" Jan 26 00:21:06 crc kubenswrapper[5124]: I0126 00:21:06.910012 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08srb2q"] Jan 26 00:21:07 crc kubenswrapper[5124]: I0126 00:21:07.281763 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08srb2q" event={"ID":"a4aff954-1afc-4dd4-8935-fa0cc1cebec6","Type":"ContainerStarted","Data":"fc8de586217df15b253ec245381c02f74e2a08542acc9af77037fcaa9ac98afa"} Jan 26 00:21:07 crc kubenswrapper[5124]: I0126 00:21:07.281816 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08srb2q" event={"ID":"a4aff954-1afc-4dd4-8935-fa0cc1cebec6","Type":"ContainerStarted","Data":"7424175a8869cc5a42776847b1eff00b967328dd8317f43036b7c6dc54b15b33"} Jan 26 00:21:08 crc kubenswrapper[5124]: I0126 00:21:08.294079 5124 generic.go:358] "Generic (PLEG): container finished" podID="a4aff954-1afc-4dd4-8935-fa0cc1cebec6" containerID="fc8de586217df15b253ec245381c02f74e2a08542acc9af77037fcaa9ac98afa" exitCode=0 Jan 26 00:21:08 crc kubenswrapper[5124]: I0126 00:21:08.294292 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08srb2q" event={"ID":"a4aff954-1afc-4dd4-8935-fa0cc1cebec6","Type":"ContainerDied","Data":"fc8de586217df15b253ec245381c02f74e2a08542acc9af77037fcaa9ac98afa"} Jan 26 00:21:09 crc kubenswrapper[5124]: I0126 00:21:09.078888 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-r9w7l"] Jan 26 00:21:09 crc kubenswrapper[5124]: I0126 00:21:09.105992 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r9w7l"] Jan 26 00:21:09 crc kubenswrapper[5124]: I0126 00:21:09.106134 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r9w7l" Jan 26 00:21:09 crc kubenswrapper[5124]: I0126 00:21:09.192717 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5511fa1-897e-4657-a92d-e3db672371f1-catalog-content\") pod \"redhat-operators-r9w7l\" (UID: \"e5511fa1-897e-4657-a92d-e3db672371f1\") " pod="openshift-marketplace/redhat-operators-r9w7l" Jan 26 00:21:09 crc kubenswrapper[5124]: I0126 00:21:09.192766 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5511fa1-897e-4657-a92d-e3db672371f1-utilities\") pod \"redhat-operators-r9w7l\" (UID: \"e5511fa1-897e-4657-a92d-e3db672371f1\") " pod="openshift-marketplace/redhat-operators-r9w7l" Jan 26 00:21:09 crc kubenswrapper[5124]: I0126 00:21:09.192804 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcfv6\" (UniqueName: \"kubernetes.io/projected/e5511fa1-897e-4657-a92d-e3db672371f1-kube-api-access-tcfv6\") pod \"redhat-operators-r9w7l\" (UID: \"e5511fa1-897e-4657-a92d-e3db672371f1\") " pod="openshift-marketplace/redhat-operators-r9w7l" Jan 26 00:21:09 crc kubenswrapper[5124]: I0126 00:21:09.293859 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5511fa1-897e-4657-a92d-e3db672371f1-catalog-content\") pod \"redhat-operators-r9w7l\" (UID: \"e5511fa1-897e-4657-a92d-e3db672371f1\") " pod="openshift-marketplace/redhat-operators-r9w7l" Jan 26 00:21:09 crc kubenswrapper[5124]: I0126 00:21:09.293922 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5511fa1-897e-4657-a92d-e3db672371f1-utilities\") pod \"redhat-operators-r9w7l\" (UID: \"e5511fa1-897e-4657-a92d-e3db672371f1\") " pod="openshift-marketplace/redhat-operators-r9w7l" Jan 26 00:21:09 crc kubenswrapper[5124]: I0126 00:21:09.293959 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tcfv6\" (UniqueName: \"kubernetes.io/projected/e5511fa1-897e-4657-a92d-e3db672371f1-kube-api-access-tcfv6\") pod \"redhat-operators-r9w7l\" (UID: \"e5511fa1-897e-4657-a92d-e3db672371f1\") " pod="openshift-marketplace/redhat-operators-r9w7l" Jan 26 00:21:09 crc kubenswrapper[5124]: I0126 00:21:09.294614 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5511fa1-897e-4657-a92d-e3db672371f1-catalog-content\") pod \"redhat-operators-r9w7l\" (UID: \"e5511fa1-897e-4657-a92d-e3db672371f1\") " pod="openshift-marketplace/redhat-operators-r9w7l" Jan 26 00:21:09 crc kubenswrapper[5124]: I0126 00:21:09.294649 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5511fa1-897e-4657-a92d-e3db672371f1-utilities\") pod \"redhat-operators-r9w7l\" (UID: \"e5511fa1-897e-4657-a92d-e3db672371f1\") " pod="openshift-marketplace/redhat-operators-r9w7l" Jan 26 00:21:09 crc kubenswrapper[5124]: I0126 00:21:09.331150 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcfv6\" (UniqueName: \"kubernetes.io/projected/e5511fa1-897e-4657-a92d-e3db672371f1-kube-api-access-tcfv6\") pod \"redhat-operators-r9w7l\" (UID: 
\"e5511fa1-897e-4657-a92d-e3db672371f1\") " pod="openshift-marketplace/redhat-operators-r9w7l" Jan 26 00:21:09 crc kubenswrapper[5124]: I0126 00:21:09.433947 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r9w7l" Jan 26 00:21:09 crc kubenswrapper[5124]: I0126 00:21:09.808486 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r9w7l"] Jan 26 00:21:10 crc kubenswrapper[5124]: I0126 00:21:10.308350 5124 generic.go:358] "Generic (PLEG): container finished" podID="a4aff954-1afc-4dd4-8935-fa0cc1cebec6" containerID="526fa2e249628cf2839e5465b1e568ed0abbd9e80184f01b201be87f43d25dd1" exitCode=0 Jan 26 00:21:10 crc kubenswrapper[5124]: I0126 00:21:10.308577 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08srb2q" event={"ID":"a4aff954-1afc-4dd4-8935-fa0cc1cebec6","Type":"ContainerDied","Data":"526fa2e249628cf2839e5465b1e568ed0abbd9e80184f01b201be87f43d25dd1"} Jan 26 00:21:10 crc kubenswrapper[5124]: I0126 00:21:10.310001 5124 generic.go:358] "Generic (PLEG): container finished" podID="e5511fa1-897e-4657-a92d-e3db672371f1" containerID="3f2347ff2a380fb1cd6cce1302623f719639b55928c37a31880d150c8ba24948" exitCode=0 Jan 26 00:21:10 crc kubenswrapper[5124]: I0126 00:21:10.310036 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9w7l" event={"ID":"e5511fa1-897e-4657-a92d-e3db672371f1","Type":"ContainerDied","Data":"3f2347ff2a380fb1cd6cce1302623f719639b55928c37a31880d150c8ba24948"} Jan 26 00:21:10 crc kubenswrapper[5124]: I0126 00:21:10.310058 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9w7l" event={"ID":"e5511fa1-897e-4657-a92d-e3db672371f1","Type":"ContainerStarted","Data":"0cbf31407b0f2349406878a7c9bf247152ab2d12b39f7668701a4b527a96680b"} Jan 26 00:21:11 crc kubenswrapper[5124]: I0126 00:21:11.324487 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9w7l" event={"ID":"e5511fa1-897e-4657-a92d-e3db672371f1","Type":"ContainerStarted","Data":"8af9f167d994edf5aad0eb52e8af4ae7a9b4e90e71b66a3f70d893f11804d1c3"} Jan 26 00:21:11 crc kubenswrapper[5124]: I0126 00:21:11.327802 5124 generic.go:358] "Generic (PLEG): container finished" podID="a4aff954-1afc-4dd4-8935-fa0cc1cebec6" containerID="a89c9e7866102c0d11fc1ff52c6b755d767f60318f70798f9185149c4f061e2d" exitCode=0 Jan 26 00:21:11 crc kubenswrapper[5124]: I0126 00:21:11.327907 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08srb2q" event={"ID":"a4aff954-1afc-4dd4-8935-fa0cc1cebec6","Type":"ContainerDied","Data":"a89c9e7866102c0d11fc1ff52c6b755d767f60318f70798f9185149c4f061e2d"} Jan 26 00:21:12 crc kubenswrapper[5124]: I0126 00:21:12.849505 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08srb2q" Jan 26 00:21:12 crc kubenswrapper[5124]: I0126 00:21:12.942449 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glnm7\" (UniqueName: \"kubernetes.io/projected/a4aff954-1afc-4dd4-8935-fa0cc1cebec6-kube-api-access-glnm7\") pod \"a4aff954-1afc-4dd4-8935-fa0cc1cebec6\" (UID: \"a4aff954-1afc-4dd4-8935-fa0cc1cebec6\") " Jan 26 00:21:12 crc kubenswrapper[5124]: I0126 00:21:12.942517 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4aff954-1afc-4dd4-8935-fa0cc1cebec6-bundle\") pod \"a4aff954-1afc-4dd4-8935-fa0cc1cebec6\" (UID: \"a4aff954-1afc-4dd4-8935-fa0cc1cebec6\") " Jan 26 00:21:12 crc kubenswrapper[5124]: I0126 00:21:12.942538 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4aff954-1afc-4dd4-8935-fa0cc1cebec6-util\") pod \"a4aff954-1afc-4dd4-8935-fa0cc1cebec6\" (UID: \"a4aff954-1afc-4dd4-8935-fa0cc1cebec6\") " Jan 26 00:21:12 crc kubenswrapper[5124]: I0126 00:21:12.943219 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4aff954-1afc-4dd4-8935-fa0cc1cebec6-util" (OuterVolumeSpecName: "util") pod "a4aff954-1afc-4dd4-8935-fa0cc1cebec6" (UID: "a4aff954-1afc-4dd4-8935-fa0cc1cebec6"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:21:12 crc kubenswrapper[5124]: I0126 00:21:12.946944 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4aff954-1afc-4dd4-8935-fa0cc1cebec6-bundle" (OuterVolumeSpecName: "bundle") pod "a4aff954-1afc-4dd4-8935-fa0cc1cebec6" (UID: "a4aff954-1afc-4dd4-8935-fa0cc1cebec6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:21:12 crc kubenswrapper[5124]: I0126 00:21:12.962124 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4aff954-1afc-4dd4-8935-fa0cc1cebec6-kube-api-access-glnm7" (OuterVolumeSpecName: "kube-api-access-glnm7") pod "a4aff954-1afc-4dd4-8935-fa0cc1cebec6" (UID: "a4aff954-1afc-4dd4-8935-fa0cc1cebec6"). InnerVolumeSpecName "kube-api-access-glnm7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:21:13 crc kubenswrapper[5124]: I0126 00:21:13.045667 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-glnm7\" (UniqueName: \"kubernetes.io/projected/a4aff954-1afc-4dd4-8935-fa0cc1cebec6-kube-api-access-glnm7\") on node \"crc\" DevicePath \"\"" Jan 26 00:21:13 crc kubenswrapper[5124]: I0126 00:21:13.045718 5124 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4aff954-1afc-4dd4-8935-fa0cc1cebec6-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:21:13 crc kubenswrapper[5124]: I0126 00:21:13.045734 5124 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4aff954-1afc-4dd4-8935-fa0cc1cebec6-util\") on node \"crc\" DevicePath \"\"" Jan 26 00:21:13 crc kubenswrapper[5124]: I0126 00:21:13.342310 5124 generic.go:358] "Generic (PLEG): container finished" podID="e5511fa1-897e-4657-a92d-e3db672371f1" containerID="8af9f167d994edf5aad0eb52e8af4ae7a9b4e90e71b66a3f70d893f11804d1c3" exitCode=0 Jan 26 00:21:13 crc kubenswrapper[5124]: I0126 00:21:13.342495 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9w7l" event={"ID":"e5511fa1-897e-4657-a92d-e3db672371f1","Type":"ContainerDied","Data":"8af9f167d994edf5aad0eb52e8af4ae7a9b4e90e71b66a3f70d893f11804d1c3"} Jan 26 00:21:13 crc kubenswrapper[5124]: I0126 00:21:13.345937 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08srb2q" event={"ID":"a4aff954-1afc-4dd4-8935-fa0cc1cebec6","Type":"ContainerDied","Data":"7424175a8869cc5a42776847b1eff00b967328dd8317f43036b7c6dc54b15b33"} Jan 26 00:21:13 crc kubenswrapper[5124]: I0126 00:21:13.345972 5124 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7424175a8869cc5a42776847b1eff00b967328dd8317f43036b7c6dc54b15b33" Jan 26 00:21:13 crc kubenswrapper[5124]: I0126 00:21:13.346053 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08srb2q" Jan 26 00:21:14 crc kubenswrapper[5124]: I0126 00:21:14.355963 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9w7l" event={"ID":"e5511fa1-897e-4657-a92d-e3db672371f1","Type":"ContainerStarted","Data":"36e286793d11f2f39d6d35c3ed3ebe07453d97740230632100b0fe40e2eecc98"} Jan 26 00:21:14 crc kubenswrapper[5124]: I0126 00:21:14.384416 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-r9w7l" podStartSLOduration=4.609008679 podStartE2EDuration="5.384382582s" podCreationTimestamp="2026-01-26 00:21:09 +0000 UTC" firstStartedPulling="2026-01-26 00:21:10.310744981 +0000 UTC m=+748.219664330" lastFinishedPulling="2026-01-26 00:21:11.086118874 +0000 UTC m=+748.995038233" observedRunningTime="2026-01-26 00:21:14.381190577 +0000 UTC m=+752.290109986" watchObservedRunningTime="2026-01-26 00:21:14.384382582 +0000 UTC m=+752.293301971" Jan 26 00:21:15 crc kubenswrapper[5124]: I0126 00:21:15.734514 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e9bb8f"] Jan 26 00:21:15 crc kubenswrapper[5124]: I0126 00:21:15.735222 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a4aff954-1afc-4dd4-8935-fa0cc1cebec6" containerName="util" Jan 26 00:21:15 crc kubenswrapper[5124]: I0126 00:21:15.735244 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4aff954-1afc-4dd4-8935-fa0cc1cebec6" containerName="util" Jan 26 00:21:15 crc kubenswrapper[5124]: I0126 00:21:15.735285 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a4aff954-1afc-4dd4-8935-fa0cc1cebec6" containerName="pull" Jan 26 00:21:15 crc kubenswrapper[5124]: I0126 00:21:15.735293 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4aff954-1afc-4dd4-8935-fa0cc1cebec6" containerName="pull" Jan 26 00:21:15 crc kubenswrapper[5124]: I0126 00:21:15.735303 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a4aff954-1afc-4dd4-8935-fa0cc1cebec6" containerName="extract" Jan 26 00:21:15 crc kubenswrapper[5124]: I0126 00:21:15.735310 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4aff954-1afc-4dd4-8935-fa0cc1cebec6" containerName="extract" Jan 26 00:21:15 crc kubenswrapper[5124]: I0126 00:21:15.735422 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="a4aff954-1afc-4dd4-8935-fa0cc1cebec6" containerName="extract" Jan 26 00:21:15 crc kubenswrapper[5124]: I0126 00:21:15.771556 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e9bb8f"] Jan 26 00:21:15 crc kubenswrapper[5124]: I0126 00:21:15.771823 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e9bb8f" Jan 26 00:21:15 crc kubenswrapper[5124]: I0126 00:21:15.775568 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Jan 26 00:21:15 crc kubenswrapper[5124]: I0126 00:21:15.885732 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f25mn\" (UniqueName: \"kubernetes.io/projected/3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c-kube-api-access-f25mn\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e9bb8f\" (UID: \"3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e9bb8f" Jan 26 00:21:15 crc kubenswrapper[5124]: I0126 00:21:15.885827 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e9bb8f\" (UID: \"3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e9bb8f" Jan 26 00:21:15 crc kubenswrapper[5124]: I0126 00:21:15.885914 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e9bb8f\" (UID: \"3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e9bb8f" Jan 26 00:21:15 crc kubenswrapper[5124]: I0126 00:21:15.987144 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f25mn\" (UniqueName: \"kubernetes.io/projected/3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c-kube-api-access-f25mn\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e9bb8f\" (UID: \"3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e9bb8f" Jan 26 00:21:15 crc kubenswrapper[5124]: I0126 00:21:15.987208 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e9bb8f\" (UID: \"3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e9bb8f" Jan 26 00:21:15 crc kubenswrapper[5124]: I0126 00:21:15.987231 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e9bb8f\" (UID: \"3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e9bb8f" Jan 26 00:21:15 crc kubenswrapper[5124]: I0126 00:21:15.987998 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e9bb8f\" (UID: \"3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c\") " 
pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e9bb8f" Jan 26 00:21:15 crc kubenswrapper[5124]: I0126 00:21:15.988709 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e9bb8f\" (UID: \"3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e9bb8f" Jan 26 00:21:16 crc kubenswrapper[5124]: I0126 00:21:16.007557 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f25mn\" (UniqueName: \"kubernetes.io/projected/3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c-kube-api-access-f25mn\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e9bb8f\" (UID: \"3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e9bb8f" Jan 26 00:21:16 crc kubenswrapper[5124]: I0126 00:21:16.091044 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e9bb8f" Jan 26 00:21:16 crc kubenswrapper[5124]: I0126 00:21:16.512224 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e9bb8f"] Jan 26 00:21:16 crc kubenswrapper[5124]: W0126 00:21:16.519578 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d1d6fa1_6660_4ff0_8195_3fb90ec72e2c.slice/crio-bf0e43e1bfe4cb7742392ea209543eb270be0a4775673153ff9de6dfe0bef85b WatchSource:0}: Error finding container bf0e43e1bfe4cb7742392ea209543eb270be0a4775673153ff9de6dfe0bef85b: Status 404 returned error can't find the container with id bf0e43e1bfe4cb7742392ea209543eb270be0a4775673153ff9de6dfe0bef85b Jan 26 00:21:16 crc kubenswrapper[5124]: I0126 00:21:16.736094 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fghgw2"] Jan 26 00:21:16 crc kubenswrapper[5124]: I0126 00:21:16.785751 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fghgw2"] Jan 26 00:21:16 crc kubenswrapper[5124]: I0126 00:21:16.786122 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fghgw2" Jan 26 00:21:16 crc kubenswrapper[5124]: I0126 00:21:16.899757 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dgnk\" (UniqueName: \"kubernetes.io/projected/b03960d1-482f-4b9d-a654-3a8a185334e9-kube-api-access-6dgnk\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fghgw2\" (UID: \"b03960d1-482f-4b9d-a654-3a8a185334e9\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fghgw2" Jan 26 00:21:16 crc kubenswrapper[5124]: I0126 00:21:16.899814 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b03960d1-482f-4b9d-a654-3a8a185334e9-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fghgw2\" (UID: \"b03960d1-482f-4b9d-a654-3a8a185334e9\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fghgw2" Jan 26 00:21:16 crc kubenswrapper[5124]: I0126 00:21:16.899878 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b03960d1-482f-4b9d-a654-3a8a185334e9-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fghgw2\" (UID: \"b03960d1-482f-4b9d-a654-3a8a185334e9\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fghgw2" Jan 26 00:21:17 crc kubenswrapper[5124]: I0126 00:21:17.001378 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6dgnk\" (UniqueName: \"kubernetes.io/projected/b03960d1-482f-4b9d-a654-3a8a185334e9-kube-api-access-6dgnk\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fghgw2\" (UID: \"b03960d1-482f-4b9d-a654-3a8a185334e9\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fghgw2" Jan 26 00:21:17 crc kubenswrapper[5124]: I0126 00:21:17.001427 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b03960d1-482f-4b9d-a654-3a8a185334e9-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fghgw2\" (UID: \"b03960d1-482f-4b9d-a654-3a8a185334e9\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fghgw2" Jan 26 00:21:17 crc kubenswrapper[5124]: I0126 00:21:17.001453 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b03960d1-482f-4b9d-a654-3a8a185334e9-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fghgw2\" (UID: \"b03960d1-482f-4b9d-a654-3a8a185334e9\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fghgw2" Jan 26 00:21:17 crc kubenswrapper[5124]: I0126 00:21:17.001905 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b03960d1-482f-4b9d-a654-3a8a185334e9-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fghgw2\" (UID: \"b03960d1-482f-4b9d-a654-3a8a185334e9\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fghgw2" Jan 26 00:21:17 crc kubenswrapper[5124]: I0126 00:21:17.002024 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" 
(UniqueName: \"kubernetes.io/empty-dir/b03960d1-482f-4b9d-a654-3a8a185334e9-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fghgw2\" (UID: \"b03960d1-482f-4b9d-a654-3a8a185334e9\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fghgw2" Jan 26 00:21:17 crc kubenswrapper[5124]: I0126 00:21:17.026133 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dgnk\" (UniqueName: \"kubernetes.io/projected/b03960d1-482f-4b9d-a654-3a8a185334e9-kube-api-access-6dgnk\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fghgw2\" (UID: \"b03960d1-482f-4b9d-a654-3a8a185334e9\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fghgw2" Jan 26 00:21:17 crc kubenswrapper[5124]: I0126 00:21:17.103952 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fghgw2" Jan 26 00:21:17 crc kubenswrapper[5124]: I0126 00:21:17.376141 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e9bb8f" event={"ID":"3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c","Type":"ContainerStarted","Data":"2dafe3c2c4b5112573692e8b7d565e1aac8e3d04a515c7fea499383462466ea3"} Jan 26 00:21:17 crc kubenswrapper[5124]: I0126 00:21:17.376569 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e9bb8f" event={"ID":"3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c","Type":"ContainerStarted","Data":"bf0e43e1bfe4cb7742392ea209543eb270be0a4775673153ff9de6dfe0bef85b"} Jan 26 00:21:17 crc kubenswrapper[5124]: I0126 00:21:17.561319 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fghgw2"] Jan 26 00:21:17 crc kubenswrapper[5124]: W0126 00:21:17.569839 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb03960d1_482f_4b9d_a654_3a8a185334e9.slice/crio-54a6dd30f3064e0f1917647ccb1ca45c53bb0a00bd1b4b19485469e6f894791b WatchSource:0}: Error finding container 54a6dd30f3064e0f1917647ccb1ca45c53bb0a00bd1b4b19485469e6f894791b: Status 404 returned error can't find the container with id 54a6dd30f3064e0f1917647ccb1ca45c53bb0a00bd1b4b19485469e6f894791b Jan 26 00:21:18 crc kubenswrapper[5124]: I0126 00:21:18.381885 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fghgw2" event={"ID":"b03960d1-482f-4b9d-a654-3a8a185334e9","Type":"ContainerStarted","Data":"ec81ee80a876e239aea4895e540b42954d3fe5de901a2e8bfe24784229763e77"} Jan 26 00:21:18 crc kubenswrapper[5124]: I0126 00:21:18.381923 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fghgw2" event={"ID":"b03960d1-482f-4b9d-a654-3a8a185334e9","Type":"ContainerStarted","Data":"54a6dd30f3064e0f1917647ccb1ca45c53bb0a00bd1b4b19485469e6f894791b"} Jan 26 00:21:19 crc kubenswrapper[5124]: I0126 00:21:19.388094 5124 generic.go:358] "Generic (PLEG): container finished" podID="3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c" containerID="2dafe3c2c4b5112573692e8b7d565e1aac8e3d04a515c7fea499383462466ea3" exitCode=0 Jan 26 00:21:19 crc kubenswrapper[5124]: I0126 00:21:19.388217 5124 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e9bb8f" event={"ID":"3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c","Type":"ContainerDied","Data":"2dafe3c2c4b5112573692e8b7d565e1aac8e3d04a515c7fea499383462466ea3"} Jan 26 00:21:19 crc kubenswrapper[5124]: I0126 00:21:19.391221 5124 generic.go:358] "Generic (PLEG): container finished" podID="b03960d1-482f-4b9d-a654-3a8a185334e9" containerID="ec81ee80a876e239aea4895e540b42954d3fe5de901a2e8bfe24784229763e77" exitCode=0 Jan 26 00:21:19 crc kubenswrapper[5124]: I0126 00:21:19.391535 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fghgw2" event={"ID":"b03960d1-482f-4b9d-a654-3a8a185334e9","Type":"ContainerDied","Data":"ec81ee80a876e239aea4895e540b42954d3fe5de901a2e8bfe24784229763e77"} Jan 26 00:21:19 crc kubenswrapper[5124]: I0126 00:21:19.434952 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-r9w7l" Jan 26 00:21:19 crc kubenswrapper[5124]: I0126 00:21:19.435003 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-r9w7l" Jan 26 00:21:20 crc kubenswrapper[5124]: I0126 00:21:20.485490 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-knrnd"] Jan 26 00:21:20 crc kubenswrapper[5124]: I0126 00:21:20.489423 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-knrnd" Jan 26 00:21:20 crc kubenswrapper[5124]: I0126 00:21:20.501655 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-knrnd"] Jan 26 00:21:20 crc kubenswrapper[5124]: I0126 00:21:20.533881 5124 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r9w7l" podUID="e5511fa1-897e-4657-a92d-e3db672371f1" containerName="registry-server" probeResult="failure" output=< Jan 26 00:21:20 crc kubenswrapper[5124]: timeout: failed to connect service ":50051" within 1s Jan 26 00:21:20 crc kubenswrapper[5124]: > Jan 26 00:21:20 crc kubenswrapper[5124]: I0126 00:21:20.618253 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/645ab611-1524-4317-9de7-9b07f91a7e56-utilities\") pod \"certified-operators-knrnd\" (UID: \"645ab611-1524-4317-9de7-9b07f91a7e56\") " pod="openshift-marketplace/certified-operators-knrnd" Jan 26 00:21:20 crc kubenswrapper[5124]: I0126 00:21:20.618326 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/645ab611-1524-4317-9de7-9b07f91a7e56-catalog-content\") pod \"certified-operators-knrnd\" (UID: \"645ab611-1524-4317-9de7-9b07f91a7e56\") " pod="openshift-marketplace/certified-operators-knrnd" Jan 26 00:21:20 crc kubenswrapper[5124]: I0126 00:21:20.618359 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6qnx\" (UniqueName: \"kubernetes.io/projected/645ab611-1524-4317-9de7-9b07f91a7e56-kube-api-access-x6qnx\") pod \"certified-operators-knrnd\" (UID: \"645ab611-1524-4317-9de7-9b07f91a7e56\") " pod="openshift-marketplace/certified-operators-knrnd" Jan 26 00:21:20 crc kubenswrapper[5124]: I0126 00:21:20.719786 5124 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/645ab611-1524-4317-9de7-9b07f91a7e56-utilities\") pod \"certified-operators-knrnd\" (UID: \"645ab611-1524-4317-9de7-9b07f91a7e56\") " pod="openshift-marketplace/certified-operators-knrnd" Jan 26 00:21:20 crc kubenswrapper[5124]: I0126 00:21:20.720024 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/645ab611-1524-4317-9de7-9b07f91a7e56-catalog-content\") pod \"certified-operators-knrnd\" (UID: \"645ab611-1524-4317-9de7-9b07f91a7e56\") " pod="openshift-marketplace/certified-operators-knrnd" Jan 26 00:21:20 crc kubenswrapper[5124]: I0126 00:21:20.720104 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x6qnx\" (UniqueName: \"kubernetes.io/projected/645ab611-1524-4317-9de7-9b07f91a7e56-kube-api-access-x6qnx\") pod \"certified-operators-knrnd\" (UID: \"645ab611-1524-4317-9de7-9b07f91a7e56\") " pod="openshift-marketplace/certified-operators-knrnd" Jan 26 00:21:20 crc kubenswrapper[5124]: I0126 00:21:20.720659 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/645ab611-1524-4317-9de7-9b07f91a7e56-catalog-content\") pod \"certified-operators-knrnd\" (UID: \"645ab611-1524-4317-9de7-9b07f91a7e56\") " pod="openshift-marketplace/certified-operators-knrnd" Jan 26 00:21:20 crc kubenswrapper[5124]: I0126 00:21:20.720950 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/645ab611-1524-4317-9de7-9b07f91a7e56-utilities\") pod \"certified-operators-knrnd\" (UID: \"645ab611-1524-4317-9de7-9b07f91a7e56\") " pod="openshift-marketplace/certified-operators-knrnd" Jan 26 00:21:20 crc kubenswrapper[5124]: I0126 00:21:20.752443 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6qnx\" (UniqueName: \"kubernetes.io/projected/645ab611-1524-4317-9de7-9b07f91a7e56-kube-api-access-x6qnx\") pod \"certified-operators-knrnd\" (UID: \"645ab611-1524-4317-9de7-9b07f91a7e56\") " pod="openshift-marketplace/certified-operators-knrnd" Jan 26 00:21:20 crc kubenswrapper[5124]: I0126 00:21:20.804066 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-knrnd" Jan 26 00:21:21 crc kubenswrapper[5124]: I0126 00:21:21.402543 5124 generic.go:358] "Generic (PLEG): container finished" podID="3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c" containerID="ba80d07be7dcdf63f8e2bfdd4794356d47bee4b95e67f457ee3f7bcaabd4f009" exitCode=0 Jan 26 00:21:21 crc kubenswrapper[5124]: I0126 00:21:21.402570 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e9bb8f" event={"ID":"3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c","Type":"ContainerDied","Data":"ba80d07be7dcdf63f8e2bfdd4794356d47bee4b95e67f457ee3f7bcaabd4f009"} Jan 26 00:21:21 crc kubenswrapper[5124]: I0126 00:21:21.551248 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-knrnd"] Jan 26 00:21:21 crc kubenswrapper[5124]: W0126 00:21:21.576741 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod645ab611_1524_4317_9de7_9b07f91a7e56.slice/crio-7e61fea083b2408171c4aca781ead2dfb3495da7b90ef1c269c7b9de8c50d39d WatchSource:0}: Error finding container 7e61fea083b2408171c4aca781ead2dfb3495da7b90ef1c269c7b9de8c50d39d: Status 404 returned error can't find the container with id 7e61fea083b2408171c4aca781ead2dfb3495da7b90ef1c269c7b9de8c50d39d Jan 26 00:21:21 crc kubenswrapper[5124]: I0126 00:21:21.669610 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arw569"] Jan 26 00:21:21 crc kubenswrapper[5124]: I0126 00:21:21.743465 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arw569"] Jan 26 00:21:21 crc kubenswrapper[5124]: I0126 00:21:21.743757 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arw569" Jan 26 00:21:21 crc kubenswrapper[5124]: I0126 00:21:21.836433 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b87ef7de-04b2-4f6e-a380-8f3fc72b51d4-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arw569\" (UID: \"b87ef7de-04b2-4f6e-a380-8f3fc72b51d4\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arw569" Jan 26 00:21:21 crc kubenswrapper[5124]: I0126 00:21:21.836764 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b87ef7de-04b2-4f6e-a380-8f3fc72b51d4-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arw569\" (UID: \"b87ef7de-04b2-4f6e-a380-8f3fc72b51d4\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arw569" Jan 26 00:21:21 crc kubenswrapper[5124]: I0126 00:21:21.837262 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzf4p\" (UniqueName: \"kubernetes.io/projected/b87ef7de-04b2-4f6e-a380-8f3fc72b51d4-kube-api-access-pzf4p\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arw569\" (UID: \"b87ef7de-04b2-4f6e-a380-8f3fc72b51d4\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arw569" Jan 26 00:21:21 crc kubenswrapper[5124]: I0126 00:21:21.938442 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pzf4p\" (UniqueName: \"kubernetes.io/projected/b87ef7de-04b2-4f6e-a380-8f3fc72b51d4-kube-api-access-pzf4p\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arw569\" (UID: \"b87ef7de-04b2-4f6e-a380-8f3fc72b51d4\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arw569" Jan 26 00:21:21 crc kubenswrapper[5124]: I0126 00:21:21.938544 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b87ef7de-04b2-4f6e-a380-8f3fc72b51d4-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arw569\" (UID: \"b87ef7de-04b2-4f6e-a380-8f3fc72b51d4\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arw569" Jan 26 00:21:21 crc kubenswrapper[5124]: I0126 00:21:21.939024 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b87ef7de-04b2-4f6e-a380-8f3fc72b51d4-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arw569\" (UID: \"b87ef7de-04b2-4f6e-a380-8f3fc72b51d4\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arw569" Jan 26 00:21:21 crc kubenswrapper[5124]: I0126 00:21:21.939092 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b87ef7de-04b2-4f6e-a380-8f3fc72b51d4-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arw569\" (UID: \"b87ef7de-04b2-4f6e-a380-8f3fc72b51d4\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arw569" Jan 26 00:21:21 crc kubenswrapper[5124]: I0126 00:21:21.939360 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" 
(UniqueName: \"kubernetes.io/empty-dir/b87ef7de-04b2-4f6e-a380-8f3fc72b51d4-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arw569\" (UID: \"b87ef7de-04b2-4f6e-a380-8f3fc72b51d4\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arw569" Jan 26 00:21:21 crc kubenswrapper[5124]: I0126 00:21:21.966621 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzf4p\" (UniqueName: \"kubernetes.io/projected/b87ef7de-04b2-4f6e-a380-8f3fc72b51d4-kube-api-access-pzf4p\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arw569\" (UID: \"b87ef7de-04b2-4f6e-a380-8f3fc72b51d4\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arw569" Jan 26 00:21:22 crc kubenswrapper[5124]: I0126 00:21:22.079451 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arw569" Jan 26 00:21:22 crc kubenswrapper[5124]: I0126 00:21:22.451645 5124 generic.go:358] "Generic (PLEG): container finished" podID="645ab611-1524-4317-9de7-9b07f91a7e56" containerID="99d628494c9032db7429bccb897b1a8054c4f0967026fbfc85869c552c4f5b54" exitCode=0 Jan 26 00:21:22 crc kubenswrapper[5124]: I0126 00:21:22.451866 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-knrnd" event={"ID":"645ab611-1524-4317-9de7-9b07f91a7e56","Type":"ContainerDied","Data":"99d628494c9032db7429bccb897b1a8054c4f0967026fbfc85869c552c4f5b54"} Jan 26 00:21:22 crc kubenswrapper[5124]: I0126 00:21:22.451904 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-knrnd" event={"ID":"645ab611-1524-4317-9de7-9b07f91a7e56","Type":"ContainerStarted","Data":"7e61fea083b2408171c4aca781ead2dfb3495da7b90ef1c269c7b9de8c50d39d"} Jan 26 00:21:22 crc kubenswrapper[5124]: I0126 00:21:22.481569 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arw569"] Jan 26 00:21:22 crc kubenswrapper[5124]: I0126 00:21:22.482326 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e9bb8f" event={"ID":"3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c","Type":"ContainerStarted","Data":"c9687997ea850ac3a0d5e603bb9615b0c595083edf719518c212679f8a4a6c27"} Jan 26 00:21:23 crc kubenswrapper[5124]: I0126 00:21:23.491183 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-knrnd" event={"ID":"645ab611-1524-4317-9de7-9b07f91a7e56","Type":"ContainerStarted","Data":"dc6c6a8389841ce6a14441175c6d3a7bda376671817716cb9ff71b07e5ca9abc"} Jan 26 00:21:23 crc kubenswrapper[5124]: I0126 00:21:23.494739 5124 generic.go:358] "Generic (PLEG): container finished" podID="3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c" containerID="c9687997ea850ac3a0d5e603bb9615b0c595083edf719518c212679f8a4a6c27" exitCode=0 Jan 26 00:21:23 crc kubenswrapper[5124]: I0126 00:21:23.494823 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e9bb8f" event={"ID":"3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c","Type":"ContainerDied","Data":"c9687997ea850ac3a0d5e603bb9615b0c595083edf719518c212679f8a4a6c27"} Jan 26 00:21:23 crc kubenswrapper[5124]: I0126 00:21:23.496811 5124 generic.go:358] "Generic (PLEG): container finished" 
podID="b87ef7de-04b2-4f6e-a380-8f3fc72b51d4" containerID="1b5e594f70b99a11021e8171f3f8e69632c7e15f695f913edee87988ba2a6642" exitCode=0 Jan 26 00:21:23 crc kubenswrapper[5124]: I0126 00:21:23.496858 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arw569" event={"ID":"b87ef7de-04b2-4f6e-a380-8f3fc72b51d4","Type":"ContainerDied","Data":"1b5e594f70b99a11021e8171f3f8e69632c7e15f695f913edee87988ba2a6642"} Jan 26 00:21:23 crc kubenswrapper[5124]: I0126 00:21:23.497049 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arw569" event={"ID":"b87ef7de-04b2-4f6e-a380-8f3fc72b51d4","Type":"ContainerStarted","Data":"1ea652b6cb7cda0cf36d55ee1e827b6a8e07cb190977b2c92f6da106812961da"} Jan 26 00:21:23 crc kubenswrapper[5124]: I0126 00:21:23.500778 5124 generic.go:358] "Generic (PLEG): container finished" podID="b03960d1-482f-4b9d-a654-3a8a185334e9" containerID="b3a3226451ca6f117cf320c3e7abd5c19b2df1feb134a384a87ce6176d25508c" exitCode=0 Jan 26 00:21:23 crc kubenswrapper[5124]: I0126 00:21:23.500853 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fghgw2" event={"ID":"b03960d1-482f-4b9d-a654-3a8a185334e9","Type":"ContainerDied","Data":"b3a3226451ca6f117cf320c3e7abd5c19b2df1feb134a384a87ce6176d25508c"} Jan 26 00:21:23 crc kubenswrapper[5124]: I0126 00:21:23.516288 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e9bb8f" podStartSLOduration=7.010139405 podStartE2EDuration="8.516266552s" podCreationTimestamp="2026-01-26 00:21:15 +0000 UTC" firstStartedPulling="2026-01-26 00:21:19.390147006 +0000 UTC m=+757.299066355" lastFinishedPulling="2026-01-26 00:21:20.896274153 +0000 UTC m=+758.805193502" observedRunningTime="2026-01-26 00:21:22.520986416 +0000 UTC m=+760.429905775" watchObservedRunningTime="2026-01-26 00:21:23.516266552 +0000 UTC m=+761.425185901" Jan 26 00:21:24 crc kubenswrapper[5124]: I0126 00:21:24.507204 5124 generic.go:358] "Generic (PLEG): container finished" podID="b03960d1-482f-4b9d-a654-3a8a185334e9" containerID="6d26770f645fb649373a8c6fddc90fd027d78488f0504757bc338ba8b8f4508a" exitCode=0 Jan 26 00:21:24 crc kubenswrapper[5124]: I0126 00:21:24.507364 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fghgw2" event={"ID":"b03960d1-482f-4b9d-a654-3a8a185334e9","Type":"ContainerDied","Data":"6d26770f645fb649373a8c6fddc90fd027d78488f0504757bc338ba8b8f4508a"} Jan 26 00:21:24 crc kubenswrapper[5124]: I0126 00:21:24.509704 5124 generic.go:358] "Generic (PLEG): container finished" podID="645ab611-1524-4317-9de7-9b07f91a7e56" containerID="dc6c6a8389841ce6a14441175c6d3a7bda376671817716cb9ff71b07e5ca9abc" exitCode=0 Jan 26 00:21:24 crc kubenswrapper[5124]: I0126 00:21:24.509773 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-knrnd" event={"ID":"645ab611-1524-4317-9de7-9b07f91a7e56","Type":"ContainerDied","Data":"dc6c6a8389841ce6a14441175c6d3a7bda376671817716cb9ff71b07e5ca9abc"} Jan 26 00:21:24 crc kubenswrapper[5124]: I0126 00:21:24.764888 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e9bb8f" Jan 26 00:21:24 crc kubenswrapper[5124]: I0126 00:21:24.821078 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c-util\") pod \"3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c\" (UID: \"3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c\") " Jan 26 00:21:24 crc kubenswrapper[5124]: I0126 00:21:24.821304 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f25mn\" (UniqueName: \"kubernetes.io/projected/3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c-kube-api-access-f25mn\") pod \"3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c\" (UID: \"3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c\") " Jan 26 00:21:24 crc kubenswrapper[5124]: I0126 00:21:24.838805 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c-bundle\") pod \"3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c\" (UID: \"3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c\") " Jan 26 00:21:24 crc kubenswrapper[5124]: I0126 00:21:24.839766 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c-bundle" (OuterVolumeSpecName: "bundle") pod "3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c" (UID: "3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:21:24 crc kubenswrapper[5124]: I0126 00:21:24.841556 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c-kube-api-access-f25mn" (OuterVolumeSpecName: "kube-api-access-f25mn") pod "3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c" (UID: "3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c"). InnerVolumeSpecName "kube-api-access-f25mn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:21:24 crc kubenswrapper[5124]: I0126 00:21:24.854712 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c-util" (OuterVolumeSpecName: "util") pod "3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c" (UID: "3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:21:24 crc kubenswrapper[5124]: I0126 00:21:24.940978 5124 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c-util\") on node \"crc\" DevicePath \"\"" Jan 26 00:21:24 crc kubenswrapper[5124]: I0126 00:21:24.941036 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f25mn\" (UniqueName: \"kubernetes.io/projected/3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c-kube-api-access-f25mn\") on node \"crc\" DevicePath \"\"" Jan 26 00:21:24 crc kubenswrapper[5124]: I0126 00:21:24.941054 5124 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.519603 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-knrnd" event={"ID":"645ab611-1524-4317-9de7-9b07f91a7e56","Type":"ContainerStarted","Data":"b934b89d00aa449e79c5159eb93bd215ea2e10ad387d095cfbfc8aa77e5f5bbf"} Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.523377 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e9bb8f" event={"ID":"3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c","Type":"ContainerDied","Data":"bf0e43e1bfe4cb7742392ea209543eb270be0a4775673153ff9de6dfe0bef85b"} Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.523412 5124 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf0e43e1bfe4cb7742392ea209543eb270be0a4775673153ff9de6dfe0bef85b" Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.523456 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e9bb8f" Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.554734 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-knrnd" podStartSLOduration=4.960286835 podStartE2EDuration="5.554713249s" podCreationTimestamp="2026-01-26 00:21:20 +0000 UTC" firstStartedPulling="2026-01-26 00:21:22.452668127 +0000 UTC m=+760.361587476" lastFinishedPulling="2026-01-26 00:21:23.047094541 +0000 UTC m=+760.956013890" observedRunningTime="2026-01-26 00:21:25.538855388 +0000 UTC m=+763.447774747" watchObservedRunningTime="2026-01-26 00:21:25.554713249 +0000 UTC m=+763.463632598" Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.627694 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-rdc79"] Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.628550 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c" containerName="extract" Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.628665 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c" containerName="extract" Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.628729 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c" containerName="pull" Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.628785 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c" containerName="pull" Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.628840 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c" containerName="util" Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.628893 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c" containerName="util" Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.629128 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c" containerName="extract" Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.658858 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-rdc79"] Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.659055 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-rdc79" Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.662776 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-nwx65\"" Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.662969 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\"" Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.663847 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\"" Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.685293 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66c4ff6d7c-wdcnm"] Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.692485 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66c4ff6d7c-wdcnm" Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.695721 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.695886 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-b626w\"" Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.704295 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66c4ff6d7c-wdcnm"] Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.715625 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66c4ff6d7c-sdqdp"] Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.719732 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66c4ff6d7c-sdqdp" Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.726493 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66c4ff6d7c-sdqdp"] Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.767066 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/370cc157-a069-4b36-aee7-98b2607e01c3-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66c4ff6d7c-wdcnm\" (UID: \"370cc157-a069-4b36-aee7-98b2607e01c3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66c4ff6d7c-wdcnm" Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.767401 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88nc5\" (UniqueName: \"kubernetes.io/projected/55489b76-1256-4d20-b6ab-800ea25b615a-kube-api-access-88nc5\") pod \"obo-prometheus-operator-9bc85b4bf-rdc79\" (UID: \"55489b76-1256-4d20-b6ab-800ea25b615a\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-rdc79" Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.767520 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/370cc157-a069-4b36-aee7-98b2607e01c3-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-66c4ff6d7c-wdcnm\" (UID: \"370cc157-a069-4b36-aee7-98b2607e01c3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66c4ff6d7c-wdcnm" Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.862732 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-85c68dddb-dxwvg"] Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.868459 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/370cc157-a069-4b36-aee7-98b2607e01c3-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66c4ff6d7c-wdcnm\" (UID: \"370cc157-a069-4b36-aee7-98b2607e01c3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66c4ff6d7c-wdcnm" Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.868524 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0eb54603-766c-4938-8f12-fcd1c1673213-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-66c4ff6d7c-sdqdp\" (UID: \"0eb54603-766c-4938-8f12-fcd1c1673213\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66c4ff6d7c-sdqdp" Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.868558 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-88nc5\" (UniqueName: \"kubernetes.io/projected/55489b76-1256-4d20-b6ab-800ea25b615a-kube-api-access-88nc5\") pod \"obo-prometheus-operator-9bc85b4bf-rdc79\" (UID: \"55489b76-1256-4d20-b6ab-800ea25b615a\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-rdc79" Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.868595 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/370cc157-a069-4b36-aee7-98b2607e01c3-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-66c4ff6d7c-wdcnm\" (UID: \"370cc157-a069-4b36-aee7-98b2607e01c3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66c4ff6d7c-wdcnm" Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.868626 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0eb54603-766c-4938-8f12-fcd1c1673213-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66c4ff6d7c-sdqdp\" (UID: \"0eb54603-766c-4938-8f12-fcd1c1673213\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66c4ff6d7c-sdqdp" Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.872837 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-dxwvg" Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.880681 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/370cc157-a069-4b36-aee7-98b2607e01c3-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66c4ff6d7c-wdcnm\" (UID: \"370cc157-a069-4b36-aee7-98b2607e01c3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66c4ff6d7c-wdcnm" Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.880914 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.880699 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/370cc157-a069-4b36-aee7-98b2607e01c3-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-66c4ff6d7c-wdcnm\" (UID: \"370cc157-a069-4b36-aee7-98b2607e01c3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66c4ff6d7c-wdcnm" Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.881238 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-tvnqt\"" Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.885876 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-dxwvg"] Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.947312 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-88nc5\" (UniqueName: \"kubernetes.io/projected/55489b76-1256-4d20-b6ab-800ea25b615a-kube-api-access-88nc5\") pod \"obo-prometheus-operator-9bc85b4bf-rdc79\" (UID: \"55489b76-1256-4d20-b6ab-800ea25b615a\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-rdc79" Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.971406 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/54f9d0ba-a6be-4a87-a44f-80b2bc6c0879-observability-operator-tls\") pod \"observability-operator-85c68dddb-dxwvg\" (UID: \"54f9d0ba-a6be-4a87-a44f-80b2bc6c0879\") " pod="openshift-operators/observability-operator-85c68dddb-dxwvg" Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.971517 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0eb54603-766c-4938-8f12-fcd1c1673213-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-66c4ff6d7c-sdqdp\" (UID: \"0eb54603-766c-4938-8f12-fcd1c1673213\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66c4ff6d7c-sdqdp" Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.971551 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7sc8\" (UniqueName: \"kubernetes.io/projected/54f9d0ba-a6be-4a87-a44f-80b2bc6c0879-kube-api-access-q7sc8\") pod \"observability-operator-85c68dddb-dxwvg\" (UID: \"54f9d0ba-a6be-4a87-a44f-80b2bc6c0879\") " pod="openshift-operators/observability-operator-85c68dddb-dxwvg" Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.971615 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0eb54603-766c-4938-8f12-fcd1c1673213-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66c4ff6d7c-sdqdp\" (UID: \"0eb54603-766c-4938-8f12-fcd1c1673213\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66c4ff6d7c-sdqdp" Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.981250 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0eb54603-766c-4938-8f12-fcd1c1673213-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66c4ff6d7c-sdqdp\" (UID: \"0eb54603-766c-4938-8f12-fcd1c1673213\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66c4ff6d7c-sdqdp" Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.991916 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-rdc79" Jan 26 00:21:25 crc kubenswrapper[5124]: I0126 00:21:25.992213 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0eb54603-766c-4938-8f12-fcd1c1673213-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-66c4ff6d7c-sdqdp\" (UID: \"0eb54603-766c-4938-8f12-fcd1c1673213\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66c4ff6d7c-sdqdp" Jan 26 00:21:26 crc kubenswrapper[5124]: I0126 00:21:26.012221 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66c4ff6d7c-wdcnm" Jan 26 00:21:26 crc kubenswrapper[5124]: I0126 00:21:26.041118 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66c4ff6d7c-sdqdp" Jan 26 00:21:26 crc kubenswrapper[5124]: I0126 00:21:26.072348 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/54f9d0ba-a6be-4a87-a44f-80b2bc6c0879-observability-operator-tls\") pod \"observability-operator-85c68dddb-dxwvg\" (UID: \"54f9d0ba-a6be-4a87-a44f-80b2bc6c0879\") " pod="openshift-operators/observability-operator-85c68dddb-dxwvg" Jan 26 00:21:26 crc kubenswrapper[5124]: I0126 00:21:26.072505 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q7sc8\" (UniqueName: \"kubernetes.io/projected/54f9d0ba-a6be-4a87-a44f-80b2bc6c0879-kube-api-access-q7sc8\") pod \"observability-operator-85c68dddb-dxwvg\" (UID: \"54f9d0ba-a6be-4a87-a44f-80b2bc6c0879\") " pod="openshift-operators/observability-operator-85c68dddb-dxwvg" Jan 26 00:21:26 crc kubenswrapper[5124]: I0126 00:21:26.077548 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/54f9d0ba-a6be-4a87-a44f-80b2bc6c0879-observability-operator-tls\") pod \"observability-operator-85c68dddb-dxwvg\" (UID: \"54f9d0ba-a6be-4a87-a44f-80b2bc6c0879\") " pod="openshift-operators/observability-operator-85c68dddb-dxwvg" Jan 26 00:21:26 crc kubenswrapper[5124]: I0126 00:21:26.098367 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-xbrsv"] Jan 26 00:21:26 crc kubenswrapper[5124]: I0126 00:21:26.102619 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7sc8\" (UniqueName: \"kubernetes.io/projected/54f9d0ba-a6be-4a87-a44f-80b2bc6c0879-kube-api-access-q7sc8\") pod \"observability-operator-85c68dddb-dxwvg\" (UID: \"54f9d0ba-a6be-4a87-a44f-80b2bc6c0879\") " pod="openshift-operators/observability-operator-85c68dddb-dxwvg" Jan 26 00:21:26 crc kubenswrapper[5124]: I0126 00:21:26.184337 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-xbrsv"] Jan 26 00:21:26 crc kubenswrapper[5124]: I0126 00:21:26.184475 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-xbrsv" Jan 26 00:21:26 crc kubenswrapper[5124]: I0126 00:21:26.188340 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-gnng2\"" Jan 26 00:21:26 crc kubenswrapper[5124]: I0126 00:21:26.243581 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-dxwvg" Jan 26 00:21:26 crc kubenswrapper[5124]: I0126 00:21:26.268348 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fghgw2" Jan 26 00:21:26 crc kubenswrapper[5124]: I0126 00:21:26.276413 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f1927088-b361-4e51-ace6-c6029dd3239c-openshift-service-ca\") pod \"perses-operator-669c9f96b5-xbrsv\" (UID: \"f1927088-b361-4e51-ace6-c6029dd3239c\") " pod="openshift-operators/perses-operator-669c9f96b5-xbrsv" Jan 26 00:21:26 crc kubenswrapper[5124]: I0126 00:21:26.276490 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsdmv\" (UniqueName: \"kubernetes.io/projected/f1927088-b361-4e51-ace6-c6029dd3239c-kube-api-access-vsdmv\") pod \"perses-operator-669c9f96b5-xbrsv\" (UID: \"f1927088-b361-4e51-ace6-c6029dd3239c\") " pod="openshift-operators/perses-operator-669c9f96b5-xbrsv" Jan 26 00:21:26 crc kubenswrapper[5124]: I0126 00:21:26.377112 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dgnk\" (UniqueName: \"kubernetes.io/projected/b03960d1-482f-4b9d-a654-3a8a185334e9-kube-api-access-6dgnk\") pod \"b03960d1-482f-4b9d-a654-3a8a185334e9\" (UID: \"b03960d1-482f-4b9d-a654-3a8a185334e9\") " Jan 26 00:21:26 crc kubenswrapper[5124]: I0126 00:21:26.377505 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b03960d1-482f-4b9d-a654-3a8a185334e9-util\") pod \"b03960d1-482f-4b9d-a654-3a8a185334e9\" (UID: \"b03960d1-482f-4b9d-a654-3a8a185334e9\") " Jan 26 00:21:26 crc kubenswrapper[5124]: I0126 00:21:26.377541 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b03960d1-482f-4b9d-a654-3a8a185334e9-bundle\") pod \"b03960d1-482f-4b9d-a654-3a8a185334e9\" (UID: \"b03960d1-482f-4b9d-a654-3a8a185334e9\") " Jan 26 00:21:26 crc kubenswrapper[5124]: I0126 00:21:26.377858 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f1927088-b361-4e51-ace6-c6029dd3239c-openshift-service-ca\") pod \"perses-operator-669c9f96b5-xbrsv\" (UID: \"f1927088-b361-4e51-ace6-c6029dd3239c\") " pod="openshift-operators/perses-operator-669c9f96b5-xbrsv" Jan 26 00:21:26 crc kubenswrapper[5124]: I0126 00:21:26.377926 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vsdmv\" (UniqueName: \"kubernetes.io/projected/f1927088-b361-4e51-ace6-c6029dd3239c-kube-api-access-vsdmv\") pod \"perses-operator-669c9f96b5-xbrsv\" (UID: \"f1927088-b361-4e51-ace6-c6029dd3239c\") " pod="openshift-operators/perses-operator-669c9f96b5-xbrsv" Jan 26 00:21:26 crc kubenswrapper[5124]: I0126 00:21:26.380967 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b03960d1-482f-4b9d-a654-3a8a185334e9-bundle" (OuterVolumeSpecName: "bundle") pod "b03960d1-482f-4b9d-a654-3a8a185334e9" (UID: "b03960d1-482f-4b9d-a654-3a8a185334e9"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:21:26 crc kubenswrapper[5124]: I0126 00:21:26.383016 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f1927088-b361-4e51-ace6-c6029dd3239c-openshift-service-ca\") pod \"perses-operator-669c9f96b5-xbrsv\" (UID: \"f1927088-b361-4e51-ace6-c6029dd3239c\") " pod="openshift-operators/perses-operator-669c9f96b5-xbrsv" Jan 26 00:21:26 crc kubenswrapper[5124]: I0126 00:21:26.385838 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b03960d1-482f-4b9d-a654-3a8a185334e9-kube-api-access-6dgnk" (OuterVolumeSpecName: "kube-api-access-6dgnk") pod "b03960d1-482f-4b9d-a654-3a8a185334e9" (UID: "b03960d1-482f-4b9d-a654-3a8a185334e9"). InnerVolumeSpecName "kube-api-access-6dgnk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:21:26 crc kubenswrapper[5124]: I0126 00:21:26.396998 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b03960d1-482f-4b9d-a654-3a8a185334e9-util" (OuterVolumeSpecName: "util") pod "b03960d1-482f-4b9d-a654-3a8a185334e9" (UID: "b03960d1-482f-4b9d-a654-3a8a185334e9"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:21:26 crc kubenswrapper[5124]: I0126 00:21:26.413801 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsdmv\" (UniqueName: \"kubernetes.io/projected/f1927088-b361-4e51-ace6-c6029dd3239c-kube-api-access-vsdmv\") pod \"perses-operator-669c9f96b5-xbrsv\" (UID: \"f1927088-b361-4e51-ace6-c6029dd3239c\") " pod="openshift-operators/perses-operator-669c9f96b5-xbrsv" Jan 26 00:21:26 crc kubenswrapper[5124]: I0126 00:21:26.478827 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dgnk\" (UniqueName: \"kubernetes.io/projected/b03960d1-482f-4b9d-a654-3a8a185334e9-kube-api-access-6dgnk\") on node \"crc\" DevicePath \"\"" Jan 26 00:21:26 crc kubenswrapper[5124]: I0126 00:21:26.478854 5124 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b03960d1-482f-4b9d-a654-3a8a185334e9-util\") on node \"crc\" DevicePath \"\"" Jan 26 00:21:26 crc kubenswrapper[5124]: I0126 00:21:26.478863 5124 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b03960d1-482f-4b9d-a654-3a8a185334e9-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:21:26 crc kubenswrapper[5124]: I0126 00:21:26.531687 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-xbrsv" Jan 26 00:21:26 crc kubenswrapper[5124]: I0126 00:21:26.543124 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fghgw2" event={"ID":"b03960d1-482f-4b9d-a654-3a8a185334e9","Type":"ContainerDied","Data":"54a6dd30f3064e0f1917647ccb1ca45c53bb0a00bd1b4b19485469e6f894791b"} Jan 26 00:21:26 crc kubenswrapper[5124]: I0126 00:21:26.543190 5124 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54a6dd30f3064e0f1917647ccb1ca45c53bb0a00bd1b4b19485469e6f894791b" Jan 26 00:21:26 crc kubenswrapper[5124]: I0126 00:21:26.543146 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fghgw2" Jan 26 00:21:26 crc kubenswrapper[5124]: I0126 00:21:26.578772 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66c4ff6d7c-wdcnm"] Jan 26 00:21:26 crc kubenswrapper[5124]: I0126 00:21:26.617548 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66c4ff6d7c-sdqdp"] Jan 26 00:21:26 crc kubenswrapper[5124]: W0126 00:21:26.623797 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0eb54603_766c_4938_8f12_fcd1c1673213.slice/crio-d0fec5af908adfddc46d13fb340f6251de7e87408b34ae64063710a79094722e WatchSource:0}: Error finding container d0fec5af908adfddc46d13fb340f6251de7e87408b34ae64063710a79094722e: Status 404 returned error can't find the container with id d0fec5af908adfddc46d13fb340f6251de7e87408b34ae64063710a79094722e Jan 26 00:21:26 crc kubenswrapper[5124]: I0126 00:21:26.781494 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-xbrsv"] Jan 26 00:21:26 crc kubenswrapper[5124]: I0126 00:21:26.851781 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-rdc79"] Jan 26 00:21:26 crc kubenswrapper[5124]: W0126 00:21:26.855572 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55489b76_1256_4d20_b6ab_800ea25b615a.slice/crio-5fa5892a38815693e573357d53e25a90bad3f638d4bee8e3f5be79dfc4075aef WatchSource:0}: Error finding container 5fa5892a38815693e573357d53e25a90bad3f638d4bee8e3f5be79dfc4075aef: Status 404 returned error can't find the container with id 5fa5892a38815693e573357d53e25a90bad3f638d4bee8e3f5be79dfc4075aef Jan 26 00:21:26 crc kubenswrapper[5124]: I0126 00:21:26.924657 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-dxwvg"] Jan 26 00:21:26 crc kubenswrapper[5124]: W0126 00:21:26.930901 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54f9d0ba_a6be_4a87_a44f_80b2bc6c0879.slice/crio-68cb4d7859a64f1a3e268cae6c0e82d8cca64bf1264bdfd62dbf574d5b8b3bd1 WatchSource:0}: Error finding container 68cb4d7859a64f1a3e268cae6c0e82d8cca64bf1264bdfd62dbf574d5b8b3bd1: Status 404 returned error can't find the container with id 68cb4d7859a64f1a3e268cae6c0e82d8cca64bf1264bdfd62dbf574d5b8b3bd1 Jan 26 00:21:27 crc kubenswrapper[5124]: I0126 00:21:27.556982 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-xbrsv" event={"ID":"f1927088-b361-4e51-ace6-c6029dd3239c","Type":"ContainerStarted","Data":"a6931bf82c424eca5b2b5d62f25efad89ba09e386d4233602b6b31ae23d87e0c"} Jan 26 00:21:27 crc kubenswrapper[5124]: I0126 00:21:27.558464 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-dxwvg" event={"ID":"54f9d0ba-a6be-4a87-a44f-80b2bc6c0879","Type":"ContainerStarted","Data":"68cb4d7859a64f1a3e268cae6c0e82d8cca64bf1264bdfd62dbf574d5b8b3bd1"} Jan 26 00:21:27 crc kubenswrapper[5124]: I0126 00:21:27.560606 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-66c4ff6d7c-sdqdp" event={"ID":"0eb54603-766c-4938-8f12-fcd1c1673213","Type":"ContainerStarted","Data":"d0fec5af908adfddc46d13fb340f6251de7e87408b34ae64063710a79094722e"} Jan 26 00:21:27 crc kubenswrapper[5124]: I0126 00:21:27.562831 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-rdc79" event={"ID":"55489b76-1256-4d20-b6ab-800ea25b615a","Type":"ContainerStarted","Data":"5fa5892a38815693e573357d53e25a90bad3f638d4bee8e3f5be79dfc4075aef"} Jan 26 00:21:27 crc kubenswrapper[5124]: I0126 00:21:27.566012 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66c4ff6d7c-wdcnm" event={"ID":"370cc157-a069-4b36-aee7-98b2607e01c3","Type":"ContainerStarted","Data":"40f86308bd1ee0f09699c98913c3fa00067935b0a732f024b30987cfa2394130"} Jan 26 00:21:29 crc kubenswrapper[5124]: I0126 00:21:29.534111 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-r9w7l" Jan 26 00:21:29 crc kubenswrapper[5124]: I0126 00:21:29.628944 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-r9w7l" Jan 26 00:21:30 crc kubenswrapper[5124]: I0126 00:21:30.591419 5124 generic.go:358] "Generic (PLEG): container finished" podID="b87ef7de-04b2-4f6e-a380-8f3fc72b51d4" containerID="923678b7f172bed8a117390b70bed6e8b9a490d04d4a856cb9c23f4b12916809" exitCode=0 Jan 26 00:21:30 crc kubenswrapper[5124]: I0126 00:21:30.591573 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arw569" event={"ID":"b87ef7de-04b2-4f6e-a380-8f3fc72b51d4","Type":"ContainerDied","Data":"923678b7f172bed8a117390b70bed6e8b9a490d04d4a856cb9c23f4b12916809"} Jan 26 00:21:30 crc kubenswrapper[5124]: I0126 00:21:30.819338 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-knrnd" Jan 26 00:21:30 crc kubenswrapper[5124]: I0126 00:21:30.819378 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-knrnd" Jan 26 00:21:30 crc kubenswrapper[5124]: I0126 00:21:30.876291 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-j7rvk"] Jan 26 00:21:30 crc kubenswrapper[5124]: I0126 00:21:30.881229 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b03960d1-482f-4b9d-a654-3a8a185334e9" containerName="extract" Jan 26 00:21:30 crc kubenswrapper[5124]: I0126 00:21:30.881265 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="b03960d1-482f-4b9d-a654-3a8a185334e9" containerName="extract" Jan 26 00:21:30 crc kubenswrapper[5124]: I0126 00:21:30.881295 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b03960d1-482f-4b9d-a654-3a8a185334e9" containerName="pull" Jan 26 00:21:30 crc kubenswrapper[5124]: I0126 00:21:30.881302 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="b03960d1-482f-4b9d-a654-3a8a185334e9" containerName="pull" Jan 26 00:21:30 crc kubenswrapper[5124]: I0126 00:21:30.881323 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b03960d1-482f-4b9d-a654-3a8a185334e9" containerName="util" Jan 26 00:21:30 crc kubenswrapper[5124]: I0126 00:21:30.881331 5124 
state_mem.go:107] "Deleted CPUSet assignment" podUID="b03960d1-482f-4b9d-a654-3a8a185334e9" containerName="util" Jan 26 00:21:30 crc kubenswrapper[5124]: I0126 00:21:30.881479 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="b03960d1-482f-4b9d-a654-3a8a185334e9" containerName="extract" Jan 26 00:21:30 crc kubenswrapper[5124]: I0126 00:21:30.886964 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-j7rvk" Jan 26 00:21:30 crc kubenswrapper[5124]: I0126 00:21:30.888429 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-j7rvk"] Jan 26 00:21:30 crc kubenswrapper[5124]: I0126 00:21:30.889602 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"interconnect-operator-dockercfg-tpxtd\"" Jan 26 00:21:30 crc kubenswrapper[5124]: I0126 00:21:30.889914 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\"" Jan 26 00:21:30 crc kubenswrapper[5124]: I0126 00:21:30.891391 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\"" Jan 26 00:21:30 crc kubenswrapper[5124]: I0126 00:21:30.900889 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-knrnd" Jan 26 00:21:31 crc kubenswrapper[5124]: I0126 00:21:31.070335 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf26s\" (UniqueName: \"kubernetes.io/projected/195d787d-dafa-48af-85cf-bfddcb46604b-kube-api-access-mf26s\") pod \"interconnect-operator-78b9bd8798-j7rvk\" (UID: \"195d787d-dafa-48af-85cf-bfddcb46604b\") " pod="service-telemetry/interconnect-operator-78b9bd8798-j7rvk" Jan 26 00:21:31 crc kubenswrapper[5124]: I0126 00:21:31.171804 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mf26s\" (UniqueName: \"kubernetes.io/projected/195d787d-dafa-48af-85cf-bfddcb46604b-kube-api-access-mf26s\") pod \"interconnect-operator-78b9bd8798-j7rvk\" (UID: \"195d787d-dafa-48af-85cf-bfddcb46604b\") " pod="service-telemetry/interconnect-operator-78b9bd8798-j7rvk" Jan 26 00:21:31 crc kubenswrapper[5124]: I0126 00:21:31.231153 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf26s\" (UniqueName: \"kubernetes.io/projected/195d787d-dafa-48af-85cf-bfddcb46604b-kube-api-access-mf26s\") pod \"interconnect-operator-78b9bd8798-j7rvk\" (UID: \"195d787d-dafa-48af-85cf-bfddcb46604b\") " pod="service-telemetry/interconnect-operator-78b9bd8798-j7rvk" Jan 26 00:21:31 crc kubenswrapper[5124]: I0126 00:21:31.528471 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-j7rvk" Jan 26 00:21:31 crc kubenswrapper[5124]: I0126 00:21:31.603970 5124 generic.go:358] "Generic (PLEG): container finished" podID="b87ef7de-04b2-4f6e-a380-8f3fc72b51d4" containerID="648d49d08a8d491ab88ffa3f2175de3c08e5709253c2f35dd04c575ce46cd4d0" exitCode=0 Jan 26 00:21:31 crc kubenswrapper[5124]: I0126 00:21:31.605725 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arw569" event={"ID":"b87ef7de-04b2-4f6e-a380-8f3fc72b51d4","Type":"ContainerDied","Data":"648d49d08a8d491ab88ffa3f2175de3c08e5709253c2f35dd04c575ce46cd4d0"} Jan 26 00:21:31 crc kubenswrapper[5124]: I0126 00:21:31.681619 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r9w7l"] Jan 26 00:21:31 crc kubenswrapper[5124]: I0126 00:21:31.682077 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-r9w7l" podUID="e5511fa1-897e-4657-a92d-e3db672371f1" containerName="registry-server" containerID="cri-o://36e286793d11f2f39d6d35c3ed3ebe07453d97740230632100b0fe40e2eecc98" gracePeriod=2 Jan 26 00:21:31 crc kubenswrapper[5124]: I0126 00:21:31.695105 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-knrnd" Jan 26 00:21:31 crc kubenswrapper[5124]: I0126 00:21:31.965556 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-j7rvk"] Jan 26 00:21:31 crc kubenswrapper[5124]: W0126 00:21:31.973525 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod195d787d_dafa_48af_85cf_bfddcb46604b.slice/crio-c211b17564db853244bba3b7132d7860f81ae60198f7b20e1014dcaec78f0772 WatchSource:0}: Error finding container c211b17564db853244bba3b7132d7860f81ae60198f7b20e1014dcaec78f0772: Status 404 returned error can't find the container with id c211b17564db853244bba3b7132d7860f81ae60198f7b20e1014dcaec78f0772 Jan 26 00:21:32 crc kubenswrapper[5124]: I0126 00:21:32.615918 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-j7rvk" event={"ID":"195d787d-dafa-48af-85cf-bfddcb46604b","Type":"ContainerStarted","Data":"c211b17564db853244bba3b7132d7860f81ae60198f7b20e1014dcaec78f0772"} Jan 26 00:21:32 crc kubenswrapper[5124]: I0126 00:21:32.689973 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-knrnd"] Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.018366 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arw569" Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.053062 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b87ef7de-04b2-4f6e-a380-8f3fc72b51d4-util\") pod \"b87ef7de-04b2-4f6e-a380-8f3fc72b51d4\" (UID: \"b87ef7de-04b2-4f6e-a380-8f3fc72b51d4\") " Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.053250 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzf4p\" (UniqueName: \"kubernetes.io/projected/b87ef7de-04b2-4f6e-a380-8f3fc72b51d4-kube-api-access-pzf4p\") pod \"b87ef7de-04b2-4f6e-a380-8f3fc72b51d4\" (UID: \"b87ef7de-04b2-4f6e-a380-8f3fc72b51d4\") " Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.053275 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b87ef7de-04b2-4f6e-a380-8f3fc72b51d4-bundle\") pod \"b87ef7de-04b2-4f6e-a380-8f3fc72b51d4\" (UID: \"b87ef7de-04b2-4f6e-a380-8f3fc72b51d4\") " Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.054542 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b87ef7de-04b2-4f6e-a380-8f3fc72b51d4-bundle" (OuterVolumeSpecName: "bundle") pod "b87ef7de-04b2-4f6e-a380-8f3fc72b51d4" (UID: "b87ef7de-04b2-4f6e-a380-8f3fc72b51d4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.061383 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b87ef7de-04b2-4f6e-a380-8f3fc72b51d4-util" (OuterVolumeSpecName: "util") pod "b87ef7de-04b2-4f6e-a380-8f3fc72b51d4" (UID: "b87ef7de-04b2-4f6e-a380-8f3fc72b51d4"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.065233 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b87ef7de-04b2-4f6e-a380-8f3fc72b51d4-kube-api-access-pzf4p" (OuterVolumeSpecName: "kube-api-access-pzf4p") pod "b87ef7de-04b2-4f6e-a380-8f3fc72b51d4" (UID: "b87ef7de-04b2-4f6e-a380-8f3fc72b51d4"). InnerVolumeSpecName "kube-api-access-pzf4p". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.155242 5124 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b87ef7de-04b2-4f6e-a380-8f3fc72b51d4-util\") on node \"crc\" DevicePath \"\"" Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.155277 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pzf4p\" (UniqueName: \"kubernetes.io/projected/b87ef7de-04b2-4f6e-a380-8f3fc72b51d4-kube-api-access-pzf4p\") on node \"crc\" DevicePath \"\"" Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.155288 5124 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b87ef7de-04b2-4f6e-a380-8f3fc72b51d4-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.297713 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r9w7l" Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.358553 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5511fa1-897e-4657-a92d-e3db672371f1-utilities\") pod \"e5511fa1-897e-4657-a92d-e3db672371f1\" (UID: \"e5511fa1-897e-4657-a92d-e3db672371f1\") " Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.358698 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcfv6\" (UniqueName: \"kubernetes.io/projected/e5511fa1-897e-4657-a92d-e3db672371f1-kube-api-access-tcfv6\") pod \"e5511fa1-897e-4657-a92d-e3db672371f1\" (UID: \"e5511fa1-897e-4657-a92d-e3db672371f1\") " Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.358736 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5511fa1-897e-4657-a92d-e3db672371f1-catalog-content\") pod \"e5511fa1-897e-4657-a92d-e3db672371f1\" (UID: \"e5511fa1-897e-4657-a92d-e3db672371f1\") " Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.359736 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5511fa1-897e-4657-a92d-e3db672371f1-utilities" (OuterVolumeSpecName: "utilities") pod "e5511fa1-897e-4657-a92d-e3db672371f1" (UID: "e5511fa1-897e-4657-a92d-e3db672371f1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.365605 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5511fa1-897e-4657-a92d-e3db672371f1-kube-api-access-tcfv6" (OuterVolumeSpecName: "kube-api-access-tcfv6") pod "e5511fa1-897e-4657-a92d-e3db672371f1" (UID: "e5511fa1-897e-4657-a92d-e3db672371f1"). InnerVolumeSpecName "kube-api-access-tcfv6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.462687 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tcfv6\" (UniqueName: \"kubernetes.io/projected/e5511fa1-897e-4657-a92d-e3db672371f1-kube-api-access-tcfv6\") on node \"crc\" DevicePath \"\"" Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.462726 5124 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5511fa1-897e-4657-a92d-e3db672371f1-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.468731 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5511fa1-897e-4657-a92d-e3db672371f1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e5511fa1-897e-4657-a92d-e3db672371f1" (UID: "e5511fa1-897e-4657-a92d-e3db672371f1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.544057 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-677665fb78-bjrnf"] Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.544638 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b87ef7de-04b2-4f6e-a380-8f3fc72b51d4" containerName="pull" Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.544655 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="b87ef7de-04b2-4f6e-a380-8f3fc72b51d4" containerName="pull" Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.544674 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e5511fa1-897e-4657-a92d-e3db672371f1" containerName="extract-content" Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.544681 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5511fa1-897e-4657-a92d-e3db672371f1" containerName="extract-content" Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.544692 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e5511fa1-897e-4657-a92d-e3db672371f1" containerName="extract-utilities" Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.544699 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5511fa1-897e-4657-a92d-e3db672371f1" containerName="extract-utilities" Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.544708 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e5511fa1-897e-4657-a92d-e3db672371f1" containerName="registry-server" Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.544713 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5511fa1-897e-4657-a92d-e3db672371f1" containerName="registry-server" Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.544724 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b87ef7de-04b2-4f6e-a380-8f3fc72b51d4" containerName="extract" Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.544729 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="b87ef7de-04b2-4f6e-a380-8f3fc72b51d4" containerName="extract" Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.544744 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b87ef7de-04b2-4f6e-a380-8f3fc72b51d4" containerName="util" Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.544750 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="b87ef7de-04b2-4f6e-a380-8f3fc72b51d4" containerName="util" Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.544832 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="b87ef7de-04b2-4f6e-a380-8f3fc72b51d4" containerName="extract" Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.544842 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="e5511fa1-897e-4657-a92d-e3db672371f1" containerName="registry-server" Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.563818 5124 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5511fa1-897e-4657-a92d-e3db672371f1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.632861 5124 generic.go:358] "Generic (PLEG): container finished" podID="e5511fa1-897e-4657-a92d-e3db672371f1" containerID="36e286793d11f2f39d6d35c3ed3ebe07453d97740230632100b0fe40e2eecc98" 
exitCode=0 Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.920719 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r9w7l" Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.922022 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-677665fb78-bjrnf"] Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.922053 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9w7l" event={"ID":"e5511fa1-897e-4657-a92d-e3db672371f1","Type":"ContainerDied","Data":"36e286793d11f2f39d6d35c3ed3ebe07453d97740230632100b0fe40e2eecc98"} Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.922106 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9w7l" event={"ID":"e5511fa1-897e-4657-a92d-e3db672371f1","Type":"ContainerDied","Data":"0cbf31407b0f2349406878a7c9bf247152ab2d12b39f7668701a4b527a96680b"} Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.922122 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arw569" event={"ID":"b87ef7de-04b2-4f6e-a380-8f3fc72b51d4","Type":"ContainerDied","Data":"1ea652b6cb7cda0cf36d55ee1e827b6a8e07cb190977b2c92f6da106812961da"} Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.922146 5124 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ea652b6cb7cda0cf36d55ee1e827b6a8e07cb190977b2c92f6da106812961da" Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.922167 5124 scope.go:117] "RemoveContainer" containerID="36e286793d11f2f39d6d35c3ed3ebe07453d97740230632100b0fe40e2eecc98" Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.922902 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-677665fb78-bjrnf" Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.922982 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-knrnd" podUID="645ab611-1524-4317-9de7-9b07f91a7e56" containerName="registry-server" containerID="cri-o://b934b89d00aa449e79c5159eb93bd215ea2e10ad387d095cfbfc8aa77e5f5bbf" gracePeriod=2 Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.923318 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arw569" Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.926356 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\"" Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.927013 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-m5wmb\"" Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.972980 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/39bc5f3f-be0b-4318-858c-35a2bda6a88c-apiservice-cert\") pod \"elastic-operator-677665fb78-bjrnf\" (UID: \"39bc5f3f-be0b-4318-858c-35a2bda6a88c\") " pod="service-telemetry/elastic-operator-677665fb78-bjrnf" Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.973043 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97dtp\" (UniqueName: \"kubernetes.io/projected/39bc5f3f-be0b-4318-858c-35a2bda6a88c-kube-api-access-97dtp\") pod \"elastic-operator-677665fb78-bjrnf\" (UID: \"39bc5f3f-be0b-4318-858c-35a2bda6a88c\") " pod="service-telemetry/elastic-operator-677665fb78-bjrnf" Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.973104 5124 scope.go:117] "RemoveContainer" containerID="8af9f167d994edf5aad0eb52e8af4ae7a9b4e90e71b66a3f70d893f11804d1c3" Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.973210 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/39bc5f3f-be0b-4318-858c-35a2bda6a88c-webhook-cert\") pod \"elastic-operator-677665fb78-bjrnf\" (UID: \"39bc5f3f-be0b-4318-858c-35a2bda6a88c\") " pod="service-telemetry/elastic-operator-677665fb78-bjrnf" Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.989348 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r9w7l"] Jan 26 00:21:33 crc kubenswrapper[5124]: I0126 00:21:33.995278 5124 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-r9w7l"] Jan 26 00:21:34 crc kubenswrapper[5124]: I0126 00:21:34.056219 5124 scope.go:117] "RemoveContainer" containerID="3f2347ff2a380fb1cd6cce1302623f719639b55928c37a31880d150c8ba24948" Jan 26 00:21:34 crc kubenswrapper[5124]: I0126 00:21:34.074178 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-97dtp\" (UniqueName: \"kubernetes.io/projected/39bc5f3f-be0b-4318-858c-35a2bda6a88c-kube-api-access-97dtp\") pod \"elastic-operator-677665fb78-bjrnf\" (UID: \"39bc5f3f-be0b-4318-858c-35a2bda6a88c\") " pod="service-telemetry/elastic-operator-677665fb78-bjrnf" Jan 26 00:21:34 crc kubenswrapper[5124]: I0126 00:21:34.074234 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/39bc5f3f-be0b-4318-858c-35a2bda6a88c-webhook-cert\") pod \"elastic-operator-677665fb78-bjrnf\" (UID: \"39bc5f3f-be0b-4318-858c-35a2bda6a88c\") " pod="service-telemetry/elastic-operator-677665fb78-bjrnf" Jan 26 00:21:34 crc kubenswrapper[5124]: I0126 00:21:34.074292 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/39bc5f3f-be0b-4318-858c-35a2bda6a88c-apiservice-cert\") pod \"elastic-operator-677665fb78-bjrnf\" (UID: \"39bc5f3f-be0b-4318-858c-35a2bda6a88c\") " pod="service-telemetry/elastic-operator-677665fb78-bjrnf" Jan 26 00:21:34 crc kubenswrapper[5124]: I0126 00:21:34.088633 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/39bc5f3f-be0b-4318-858c-35a2bda6a88c-apiservice-cert\") pod \"elastic-operator-677665fb78-bjrnf\" (UID: \"39bc5f3f-be0b-4318-858c-35a2bda6a88c\") " pod="service-telemetry/elastic-operator-677665fb78-bjrnf" Jan 26 00:21:34 crc kubenswrapper[5124]: I0126 00:21:34.088756 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/39bc5f3f-be0b-4318-858c-35a2bda6a88c-webhook-cert\") pod \"elastic-operator-677665fb78-bjrnf\" (UID: \"39bc5f3f-be0b-4318-858c-35a2bda6a88c\") " pod="service-telemetry/elastic-operator-677665fb78-bjrnf" Jan 26 00:21:34 crc kubenswrapper[5124]: I0126 00:21:34.090821 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-97dtp\" (UniqueName: \"kubernetes.io/projected/39bc5f3f-be0b-4318-858c-35a2bda6a88c-kube-api-access-97dtp\") pod \"elastic-operator-677665fb78-bjrnf\" (UID: \"39bc5f3f-be0b-4318-858c-35a2bda6a88c\") " pod="service-telemetry/elastic-operator-677665fb78-bjrnf" Jan 26 00:21:34 crc kubenswrapper[5124]: I0126 00:21:34.123559 5124 scope.go:117] "RemoveContainer" containerID="36e286793d11f2f39d6d35c3ed3ebe07453d97740230632100b0fe40e2eecc98" Jan 26 00:21:34 crc kubenswrapper[5124]: E0126 00:21:34.124337 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36e286793d11f2f39d6d35c3ed3ebe07453d97740230632100b0fe40e2eecc98\": container with ID starting with 36e286793d11f2f39d6d35c3ed3ebe07453d97740230632100b0fe40e2eecc98 not found: ID does not exist" containerID="36e286793d11f2f39d6d35c3ed3ebe07453d97740230632100b0fe40e2eecc98" Jan 26 00:21:34 crc kubenswrapper[5124]: I0126 00:21:34.124367 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36e286793d11f2f39d6d35c3ed3ebe07453d97740230632100b0fe40e2eecc98"} err="failed to get container status \"36e286793d11f2f39d6d35c3ed3ebe07453d97740230632100b0fe40e2eecc98\": rpc error: code = NotFound desc = could not find container \"36e286793d11f2f39d6d35c3ed3ebe07453d97740230632100b0fe40e2eecc98\": container with ID starting with 36e286793d11f2f39d6d35c3ed3ebe07453d97740230632100b0fe40e2eecc98 not found: ID does not exist" Jan 26 00:21:34 crc kubenswrapper[5124]: I0126 00:21:34.124387 5124 scope.go:117] "RemoveContainer" containerID="8af9f167d994edf5aad0eb52e8af4ae7a9b4e90e71b66a3f70d893f11804d1c3" Jan 26 00:21:34 crc kubenswrapper[5124]: E0126 00:21:34.124916 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8af9f167d994edf5aad0eb52e8af4ae7a9b4e90e71b66a3f70d893f11804d1c3\": container with ID starting with 8af9f167d994edf5aad0eb52e8af4ae7a9b4e90e71b66a3f70d893f11804d1c3 not found: ID does not exist" containerID="8af9f167d994edf5aad0eb52e8af4ae7a9b4e90e71b66a3f70d893f11804d1c3" Jan 26 00:21:34 crc kubenswrapper[5124]: I0126 00:21:34.124937 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8af9f167d994edf5aad0eb52e8af4ae7a9b4e90e71b66a3f70d893f11804d1c3"} 
err="failed to get container status \"8af9f167d994edf5aad0eb52e8af4ae7a9b4e90e71b66a3f70d893f11804d1c3\": rpc error: code = NotFound desc = could not find container \"8af9f167d994edf5aad0eb52e8af4ae7a9b4e90e71b66a3f70d893f11804d1c3\": container with ID starting with 8af9f167d994edf5aad0eb52e8af4ae7a9b4e90e71b66a3f70d893f11804d1c3 not found: ID does not exist" Jan 26 00:21:34 crc kubenswrapper[5124]: I0126 00:21:34.124949 5124 scope.go:117] "RemoveContainer" containerID="3f2347ff2a380fb1cd6cce1302623f719639b55928c37a31880d150c8ba24948" Jan 26 00:21:34 crc kubenswrapper[5124]: E0126 00:21:34.126004 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f2347ff2a380fb1cd6cce1302623f719639b55928c37a31880d150c8ba24948\": container with ID starting with 3f2347ff2a380fb1cd6cce1302623f719639b55928c37a31880d150c8ba24948 not found: ID does not exist" containerID="3f2347ff2a380fb1cd6cce1302623f719639b55928c37a31880d150c8ba24948" Jan 26 00:21:34 crc kubenswrapper[5124]: I0126 00:21:34.126027 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f2347ff2a380fb1cd6cce1302623f719639b55928c37a31880d150c8ba24948"} err="failed to get container status \"3f2347ff2a380fb1cd6cce1302623f719639b55928c37a31880d150c8ba24948\": rpc error: code = NotFound desc = could not find container \"3f2347ff2a380fb1cd6cce1302623f719639b55928c37a31880d150c8ba24948\": container with ID starting with 3f2347ff2a380fb1cd6cce1302623f719639b55928c37a31880d150c8ba24948 not found: ID does not exist" Jan 26 00:21:34 crc kubenswrapper[5124]: I0126 00:21:34.281011 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-677665fb78-bjrnf" Jan 26 00:21:34 crc kubenswrapper[5124]: I0126 00:21:34.350641 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-knrnd" Jan 26 00:21:34 crc kubenswrapper[5124]: I0126 00:21:34.376032 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5511fa1-897e-4657-a92d-e3db672371f1" path="/var/lib/kubelet/pods/e5511fa1-897e-4657-a92d-e3db672371f1/volumes" Jan 26 00:21:34 crc kubenswrapper[5124]: I0126 00:21:34.505843 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/645ab611-1524-4317-9de7-9b07f91a7e56-catalog-content\") pod \"645ab611-1524-4317-9de7-9b07f91a7e56\" (UID: \"645ab611-1524-4317-9de7-9b07f91a7e56\") " Jan 26 00:21:34 crc kubenswrapper[5124]: I0126 00:21:34.505936 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6qnx\" (UniqueName: \"kubernetes.io/projected/645ab611-1524-4317-9de7-9b07f91a7e56-kube-api-access-x6qnx\") pod \"645ab611-1524-4317-9de7-9b07f91a7e56\" (UID: \"645ab611-1524-4317-9de7-9b07f91a7e56\") " Jan 26 00:21:34 crc kubenswrapper[5124]: I0126 00:21:34.506024 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/645ab611-1524-4317-9de7-9b07f91a7e56-utilities\") pod \"645ab611-1524-4317-9de7-9b07f91a7e56\" (UID: \"645ab611-1524-4317-9de7-9b07f91a7e56\") " Jan 26 00:21:34 crc kubenswrapper[5124]: I0126 00:21:34.518716 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/645ab611-1524-4317-9de7-9b07f91a7e56-kube-api-access-x6qnx" (OuterVolumeSpecName: "kube-api-access-x6qnx") pod "645ab611-1524-4317-9de7-9b07f91a7e56" (UID: "645ab611-1524-4317-9de7-9b07f91a7e56"). InnerVolumeSpecName "kube-api-access-x6qnx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:21:34 crc kubenswrapper[5124]: I0126 00:21:34.522435 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/645ab611-1524-4317-9de7-9b07f91a7e56-utilities" (OuterVolumeSpecName: "utilities") pod "645ab611-1524-4317-9de7-9b07f91a7e56" (UID: "645ab611-1524-4317-9de7-9b07f91a7e56"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:21:34 crc kubenswrapper[5124]: I0126 00:21:34.579549 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/645ab611-1524-4317-9de7-9b07f91a7e56-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "645ab611-1524-4317-9de7-9b07f91a7e56" (UID: "645ab611-1524-4317-9de7-9b07f91a7e56"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:21:34 crc kubenswrapper[5124]: I0126 00:21:34.606808 5124 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/645ab611-1524-4317-9de7-9b07f91a7e56-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:21:34 crc kubenswrapper[5124]: I0126 00:21:34.606840 5124 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/645ab611-1524-4317-9de7-9b07f91a7e56-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:21:34 crc kubenswrapper[5124]: I0126 00:21:34.606851 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x6qnx\" (UniqueName: \"kubernetes.io/projected/645ab611-1524-4317-9de7-9b07f91a7e56-kube-api-access-x6qnx\") on node \"crc\" DevicePath \"\"" Jan 26 00:21:34 crc kubenswrapper[5124]: I0126 00:21:34.627334 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-677665fb78-bjrnf"] Jan 26 00:21:34 crc kubenswrapper[5124]: I0126 00:21:34.658001 5124 generic.go:358] "Generic (PLEG): container finished" podID="645ab611-1524-4317-9de7-9b07f91a7e56" containerID="b934b89d00aa449e79c5159eb93bd215ea2e10ad387d095cfbfc8aa77e5f5bbf" exitCode=0 Jan 26 00:21:34 crc kubenswrapper[5124]: I0126 00:21:34.658161 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-knrnd" event={"ID":"645ab611-1524-4317-9de7-9b07f91a7e56","Type":"ContainerDied","Data":"b934b89d00aa449e79c5159eb93bd215ea2e10ad387d095cfbfc8aa77e5f5bbf"} Jan 26 00:21:34 crc kubenswrapper[5124]: I0126 00:21:34.658191 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-knrnd" event={"ID":"645ab611-1524-4317-9de7-9b07f91a7e56","Type":"ContainerDied","Data":"7e61fea083b2408171c4aca781ead2dfb3495da7b90ef1c269c7b9de8c50d39d"} Jan 26 00:21:34 crc kubenswrapper[5124]: I0126 00:21:34.658210 5124 scope.go:117] "RemoveContainer" containerID="b934b89d00aa449e79c5159eb93bd215ea2e10ad387d095cfbfc8aa77e5f5bbf" Jan 26 00:21:34 crc kubenswrapper[5124]: I0126 00:21:34.658321 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-knrnd" Jan 26 00:21:34 crc kubenswrapper[5124]: I0126 00:21:34.690904 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-knrnd"] Jan 26 00:21:34 crc kubenswrapper[5124]: I0126 00:21:34.694615 5124 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-knrnd"] Jan 26 00:21:36 crc kubenswrapper[5124]: I0126 00:21:36.381561 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="645ab611-1524-4317-9de7-9b07f91a7e56" path="/var/lib/kubelet/pods/645ab611-1524-4317-9de7-9b07f91a7e56/volumes" Jan 26 00:21:36 crc kubenswrapper[5124]: I0126 00:21:36.685843 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-677665fb78-bjrnf" event={"ID":"39bc5f3f-be0b-4318-858c-35a2bda6a88c","Type":"ContainerStarted","Data":"99da84101de8a0acbe0464672df664bb50e950aa0ee5978aa4e024cf8dab0641"} Jan 26 00:21:45 crc kubenswrapper[5124]: I0126 00:21:45.081603 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-4fx8r"] Jan 26 00:21:45 crc kubenswrapper[5124]: I0126 00:21:45.082392 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="645ab611-1524-4317-9de7-9b07f91a7e56" containerName="extract-content" Jan 26 00:21:45 crc kubenswrapper[5124]: I0126 00:21:45.082404 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="645ab611-1524-4317-9de7-9b07f91a7e56" containerName="extract-content" Jan 26 00:21:45 crc kubenswrapper[5124]: I0126 00:21:45.082418 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="645ab611-1524-4317-9de7-9b07f91a7e56" containerName="registry-server" Jan 26 00:21:45 crc kubenswrapper[5124]: I0126 00:21:45.082423 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="645ab611-1524-4317-9de7-9b07f91a7e56" containerName="registry-server" Jan 26 00:21:45 crc kubenswrapper[5124]: I0126 00:21:45.082444 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="645ab611-1524-4317-9de7-9b07f91a7e56" containerName="extract-utilities" Jan 26 00:21:45 crc kubenswrapper[5124]: I0126 00:21:45.082450 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="645ab611-1524-4317-9de7-9b07f91a7e56" containerName="extract-utilities" Jan 26 00:21:45 crc kubenswrapper[5124]: I0126 00:21:45.082544 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="645ab611-1524-4317-9de7-9b07f91a7e56" containerName="registry-server" Jan 26 00:21:45 crc kubenswrapper[5124]: I0126 00:21:45.107955 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-4fx8r" Jan 26 00:21:45 crc kubenswrapper[5124]: I0126 00:21:45.110575 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:21:45 crc kubenswrapper[5124]: I0126 00:21:45.110766 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\"" Jan 26 00:21:45 crc kubenswrapper[5124]: I0126 00:21:45.110885 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-k7sxv\"" Jan 26 00:21:45 crc kubenswrapper[5124]: I0126 00:21:45.128746 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-4fx8r"] Jan 26 00:21:45 crc kubenswrapper[5124]: I0126 00:21:45.172892 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spjgs\" (UniqueName: \"kubernetes.io/projected/7fbd1c45-4148-4f7b-bf5b-20a9e451aeb1-kube-api-access-spjgs\") pod \"cert-manager-operator-controller-manager-64c74584c4-4fx8r\" (UID: \"7fbd1c45-4148-4f7b-bf5b-20a9e451aeb1\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-4fx8r" Jan 26 00:21:45 crc kubenswrapper[5124]: I0126 00:21:45.172952 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7fbd1c45-4148-4f7b-bf5b-20a9e451aeb1-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-4fx8r\" (UID: \"7fbd1c45-4148-4f7b-bf5b-20a9e451aeb1\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-4fx8r" Jan 26 00:21:45 crc kubenswrapper[5124]: I0126 00:21:45.274001 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-spjgs\" (UniqueName: \"kubernetes.io/projected/7fbd1c45-4148-4f7b-bf5b-20a9e451aeb1-kube-api-access-spjgs\") pod \"cert-manager-operator-controller-manager-64c74584c4-4fx8r\" (UID: \"7fbd1c45-4148-4f7b-bf5b-20a9e451aeb1\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-4fx8r" Jan 26 00:21:45 crc kubenswrapper[5124]: I0126 00:21:45.274073 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7fbd1c45-4148-4f7b-bf5b-20a9e451aeb1-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-4fx8r\" (UID: \"7fbd1c45-4148-4f7b-bf5b-20a9e451aeb1\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-4fx8r" Jan 26 00:21:45 crc kubenswrapper[5124]: I0126 00:21:45.274529 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7fbd1c45-4148-4f7b-bf5b-20a9e451aeb1-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-4fx8r\" (UID: \"7fbd1c45-4148-4f7b-bf5b-20a9e451aeb1\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-4fx8r" Jan 26 00:21:45 crc kubenswrapper[5124]: I0126 00:21:45.297181 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-spjgs\" (UniqueName: \"kubernetes.io/projected/7fbd1c45-4148-4f7b-bf5b-20a9e451aeb1-kube-api-access-spjgs\") pod \"cert-manager-operator-controller-manager-64c74584c4-4fx8r\" 
(UID: \"7fbd1c45-4148-4f7b-bf5b-20a9e451aeb1\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-4fx8r" Jan 26 00:21:45 crc kubenswrapper[5124]: I0126 00:21:45.424079 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-4fx8r" Jan 26 00:21:48 crc kubenswrapper[5124]: I0126 00:21:48.611678 5124 scope.go:117] "RemoveContainer" containerID="dc6c6a8389841ce6a14441175c6d3a7bda376671817716cb9ff71b07e5ca9abc" Jan 26 00:21:50 crc kubenswrapper[5124]: I0126 00:21:50.575787 5124 scope.go:117] "RemoveContainer" containerID="99d628494c9032db7429bccb897b1a8054c4f0967026fbfc85869c552c4f5b54" Jan 26 00:21:50 crc kubenswrapper[5124]: I0126 00:21:50.718711 5124 scope.go:117] "RemoveContainer" containerID="b934b89d00aa449e79c5159eb93bd215ea2e10ad387d095cfbfc8aa77e5f5bbf" Jan 26 00:21:50 crc kubenswrapper[5124]: E0126 00:21:50.724003 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b934b89d00aa449e79c5159eb93bd215ea2e10ad387d095cfbfc8aa77e5f5bbf\": container with ID starting with b934b89d00aa449e79c5159eb93bd215ea2e10ad387d095cfbfc8aa77e5f5bbf not found: ID does not exist" containerID="b934b89d00aa449e79c5159eb93bd215ea2e10ad387d095cfbfc8aa77e5f5bbf" Jan 26 00:21:50 crc kubenswrapper[5124]: I0126 00:21:50.724048 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b934b89d00aa449e79c5159eb93bd215ea2e10ad387d095cfbfc8aa77e5f5bbf"} err="failed to get container status \"b934b89d00aa449e79c5159eb93bd215ea2e10ad387d095cfbfc8aa77e5f5bbf\": rpc error: code = NotFound desc = could not find container \"b934b89d00aa449e79c5159eb93bd215ea2e10ad387d095cfbfc8aa77e5f5bbf\": container with ID starting with b934b89d00aa449e79c5159eb93bd215ea2e10ad387d095cfbfc8aa77e5f5bbf not found: ID does not exist" Jan 26 00:21:50 crc kubenswrapper[5124]: I0126 00:21:50.724074 5124 scope.go:117] "RemoveContainer" containerID="dc6c6a8389841ce6a14441175c6d3a7bda376671817716cb9ff71b07e5ca9abc" Jan 26 00:21:50 crc kubenswrapper[5124]: E0126 00:21:50.724855 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc6c6a8389841ce6a14441175c6d3a7bda376671817716cb9ff71b07e5ca9abc\": container with ID starting with dc6c6a8389841ce6a14441175c6d3a7bda376671817716cb9ff71b07e5ca9abc not found: ID does not exist" containerID="dc6c6a8389841ce6a14441175c6d3a7bda376671817716cb9ff71b07e5ca9abc" Jan 26 00:21:50 crc kubenswrapper[5124]: I0126 00:21:50.724871 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc6c6a8389841ce6a14441175c6d3a7bda376671817716cb9ff71b07e5ca9abc"} err="failed to get container status \"dc6c6a8389841ce6a14441175c6d3a7bda376671817716cb9ff71b07e5ca9abc\": rpc error: code = NotFound desc = could not find container \"dc6c6a8389841ce6a14441175c6d3a7bda376671817716cb9ff71b07e5ca9abc\": container with ID starting with dc6c6a8389841ce6a14441175c6d3a7bda376671817716cb9ff71b07e5ca9abc not found: ID does not exist" Jan 26 00:21:50 crc kubenswrapper[5124]: I0126 00:21:50.724882 5124 scope.go:117] "RemoveContainer" containerID="99d628494c9032db7429bccb897b1a8054c4f0967026fbfc85869c552c4f5b54" Jan 26 00:21:50 crc kubenswrapper[5124]: E0126 00:21:50.725373 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"99d628494c9032db7429bccb897b1a8054c4f0967026fbfc85869c552c4f5b54\": container with ID starting with 99d628494c9032db7429bccb897b1a8054c4f0967026fbfc85869c552c4f5b54 not found: ID does not exist" containerID="99d628494c9032db7429bccb897b1a8054c4f0967026fbfc85869c552c4f5b54" Jan 26 00:21:50 crc kubenswrapper[5124]: I0126 00:21:50.725405 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99d628494c9032db7429bccb897b1a8054c4f0967026fbfc85869c552c4f5b54"} err="failed to get container status \"99d628494c9032db7429bccb897b1a8054c4f0967026fbfc85869c552c4f5b54\": rpc error: code = NotFound desc = could not find container \"99d628494c9032db7429bccb897b1a8054c4f0967026fbfc85869c552c4f5b54\": container with ID starting with 99d628494c9032db7429bccb897b1a8054c4f0967026fbfc85869c552c4f5b54 not found: ID does not exist" Jan 26 00:21:50 crc kubenswrapper[5124]: I0126 00:21:50.892638 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-4fx8r"] Jan 26 00:21:51 crc kubenswrapper[5124]: I0126 00:21:51.809180 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-rdc79" event={"ID":"55489b76-1256-4d20-b6ab-800ea25b615a","Type":"ContainerStarted","Data":"263bd2a1c80cc908f955bf49cc3a5881ce79cfdef793b613cd5f71d95710bbbe"} Jan 26 00:21:51 crc kubenswrapper[5124]: I0126 00:21:51.823731 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-rdc79" podStartSLOduration=4.057028484 podStartE2EDuration="26.823711702s" podCreationTimestamp="2026-01-26 00:21:25 +0000 UTC" firstStartedPulling="2026-01-26 00:21:26.858183151 +0000 UTC m=+764.767102510" lastFinishedPulling="2026-01-26 00:21:49.624866379 +0000 UTC m=+787.533785728" observedRunningTime="2026-01-26 00:21:51.82292119 +0000 UTC m=+789.731840559" watchObservedRunningTime="2026-01-26 00:21:51.823711702 +0000 UTC m=+789.732631051" Jan 26 00:21:51 crc kubenswrapper[5124]: I0126 00:21:51.829638 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-677665fb78-bjrnf" event={"ID":"39bc5f3f-be0b-4318-858c-35a2bda6a88c","Type":"ContainerStarted","Data":"bc965dbe67af76a79fbae1c68a2e52cb57eaa340e047ef774c2cdaecf2e59e7a"} Jan 26 00:21:51 crc kubenswrapper[5124]: I0126 00:21:51.831745 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66c4ff6d7c-wdcnm" event={"ID":"370cc157-a069-4b36-aee7-98b2607e01c3","Type":"ContainerStarted","Data":"6d53d98405fd18abb84c9ca02e42addec004b6139f53f4ad190a9fc79a62ff8f"} Jan 26 00:21:51 crc kubenswrapper[5124]: I0126 00:21:51.832697 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-xbrsv" event={"ID":"f1927088-b361-4e51-ace6-c6029dd3239c","Type":"ContainerStarted","Data":"b9d7ef6a91a1cd21ec517833299e8709ab259fac98107a278dfe49da7cf2db47"} Jan 26 00:21:51 crc kubenswrapper[5124]: I0126 00:21:51.832811 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-669c9f96b5-xbrsv" Jan 26 00:21:51 crc kubenswrapper[5124]: I0126 00:21:51.836303 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-4fx8r" 
event={"ID":"7fbd1c45-4148-4f7b-bf5b-20a9e451aeb1","Type":"ContainerStarted","Data":"09cd7ec7b0a8273fd3ce1799303f275fa6ac6e716535c35385fb056cd8e50c41"} Jan 26 00:21:51 crc kubenswrapper[5124]: I0126 00:21:51.847785 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-dxwvg" event={"ID":"54f9d0ba-a6be-4a87-a44f-80b2bc6c0879","Type":"ContainerStarted","Data":"65e7c71d7037157556c34722597168ecb7f504d81b8a0ec709994f75ef2a224b"} Jan 26 00:21:51 crc kubenswrapper[5124]: I0126 00:21:51.848002 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-85c68dddb-dxwvg" Jan 26 00:21:51 crc kubenswrapper[5124]: I0126 00:21:51.850233 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66c4ff6d7c-sdqdp" event={"ID":"0eb54603-766c-4938-8f12-fcd1c1673213","Type":"ContainerStarted","Data":"3c437331ef46807fd8fde7af613f7bb04bc1267e367f71c0cc4666e5d9f9d669"} Jan 26 00:21:51 crc kubenswrapper[5124]: I0126 00:21:51.864894 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-677665fb78-bjrnf" podStartSLOduration=4.758949761 podStartE2EDuration="18.864875392s" podCreationTimestamp="2026-01-26 00:21:33 +0000 UTC" firstStartedPulling="2026-01-26 00:21:36.626805882 +0000 UTC m=+774.535725231" lastFinishedPulling="2026-01-26 00:21:50.732731513 +0000 UTC m=+788.641650862" observedRunningTime="2026-01-26 00:21:51.850041202 +0000 UTC m=+789.758960551" watchObservedRunningTime="2026-01-26 00:21:51.864875392 +0000 UTC m=+789.773794741" Jan 26 00:21:51 crc kubenswrapper[5124]: I0126 00:21:51.867556 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-j7rvk" event={"ID":"195d787d-dafa-48af-85cf-bfddcb46604b","Type":"ContainerStarted","Data":"7202606e2e247b1fa033ad10b4ad04502539fe2e8a9870bd25de093e318c195c"} Jan 26 00:21:51 crc kubenswrapper[5124]: I0126 00:21:51.871856 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-85c68dddb-dxwvg" Jan 26 00:21:51 crc kubenswrapper[5124]: I0126 00:21:51.878221 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-85c68dddb-dxwvg" podStartSLOduration=3.23350011 podStartE2EDuration="26.878199631s" podCreationTimestamp="2026-01-26 00:21:25 +0000 UTC" firstStartedPulling="2026-01-26 00:21:26.932894139 +0000 UTC m=+764.841813488" lastFinishedPulling="2026-01-26 00:21:50.57759366 +0000 UTC m=+788.486513009" observedRunningTime="2026-01-26 00:21:51.874969124 +0000 UTC m=+789.783888493" watchObservedRunningTime="2026-01-26 00:21:51.878199631 +0000 UTC m=+789.787118990" Jan 26 00:21:51 crc kubenswrapper[5124]: I0126 00:21:51.898642 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-669c9f96b5-xbrsv" podStartSLOduration=3.080449477 podStartE2EDuration="25.898619392s" podCreationTimestamp="2026-01-26 00:21:26 +0000 UTC" firstStartedPulling="2026-01-26 00:21:26.792939063 +0000 UTC m=+764.701858412" lastFinishedPulling="2026-01-26 00:21:49.611108978 +0000 UTC m=+787.520028327" observedRunningTime="2026-01-26 00:21:51.891838599 +0000 UTC m=+789.800757958" watchObservedRunningTime="2026-01-26 00:21:51.898619392 +0000 UTC m=+789.807538741" Jan 26 00:21:51 crc kubenswrapper[5124]: I0126 
00:21:51.923414 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66c4ff6d7c-wdcnm" podStartSLOduration=4.906465328 podStartE2EDuration="26.92339694s" podCreationTimestamp="2026-01-26 00:21:25 +0000 UTC" firstStartedPulling="2026-01-26 00:21:26.605851414 +0000 UTC m=+764.514770763" lastFinishedPulling="2026-01-26 00:21:48.622783026 +0000 UTC m=+786.531702375" observedRunningTime="2026-01-26 00:21:51.920515102 +0000 UTC m=+789.829434451" watchObservedRunningTime="2026-01-26 00:21:51.92339694 +0000 UTC m=+789.832316289" Jan 26 00:21:51 crc kubenswrapper[5124]: I0126 00:21:51.949002 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-78b9bd8798-j7rvk" podStartSLOduration=3.199809647 podStartE2EDuration="21.94897663s" podCreationTimestamp="2026-01-26 00:21:30 +0000 UTC" firstStartedPulling="2026-01-26 00:21:31.982561583 +0000 UTC m=+769.891480932" lastFinishedPulling="2026-01-26 00:21:50.731728566 +0000 UTC m=+788.640647915" observedRunningTime="2026-01-26 00:21:51.944002876 +0000 UTC m=+789.852922225" watchObservedRunningTime="2026-01-26 00:21:51.94897663 +0000 UTC m=+789.857895979" Jan 26 00:21:51 crc kubenswrapper[5124]: I0126 00:21:51.997468 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66c4ff6d7c-sdqdp" podStartSLOduration=3.999498452 podStartE2EDuration="26.997449937s" podCreationTimestamp="2026-01-26 00:21:25 +0000 UTC" firstStartedPulling="2026-01-26 00:21:26.627011136 +0000 UTC m=+764.535930485" lastFinishedPulling="2026-01-26 00:21:49.624962621 +0000 UTC m=+787.533881970" observedRunningTime="2026-01-26 00:21:51.991872077 +0000 UTC m=+789.900791436" watchObservedRunningTime="2026-01-26 00:21:51.997449937 +0000 UTC m=+789.906369276" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.395834 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.419847 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.429009 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\"" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.429236 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\"" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.429349 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\"" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.429607 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\"" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.429792 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\"" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.437510 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.440929 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\"" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.441300 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\"" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.444824 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\"" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.448687 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-s9hgq\"" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.518452 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.518513 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.518553 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.518620 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: 
\"kubernetes.io/empty-dir/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.518646 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.518674 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.518699 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.518734 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.518764 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.518890 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.518925 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.518957 5124 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.518987 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.519013 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.519042 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.620169 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.620216 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.620241 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.620262 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.620769 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: 
\"kubernetes.io/empty-dir/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.620822 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.620882 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.620918 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.620994 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.621042 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.621093 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.621186 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.621478 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" 
Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.621509 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.621538 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.621610 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.621654 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.622176 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.622949 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.623394 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.623779 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.624099 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.624970 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.628137 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.628764 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.629142 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.629782 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.629882 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.630410 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.643242 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/902ad87b-a27d-49e6-a2e3-1e3e274d16d1-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: 
\"902ad87b-a27d-49e6-a2e3-1e3e274d16d1\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:52 crc kubenswrapper[5124]: I0126 00:21:52.752008 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:21:53 crc kubenswrapper[5124]: I0126 00:21:53.109389 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 26 00:21:53 crc kubenswrapper[5124]: I0126 00:21:53.881321 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"902ad87b-a27d-49e6-a2e3-1e3e274d16d1","Type":"ContainerStarted","Data":"ad1ec51e01ca871a1b87a27710f80aa297516ee2baf021d5c62ab667220b7ed0"} Jan 26 00:22:00 crc kubenswrapper[5124]: I0126 00:22:00.132684 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29489782-p756r"] Jan 26 00:22:00 crc kubenswrapper[5124]: I0126 00:22:00.156107 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489782-p756r"] Jan 26 00:22:00 crc kubenswrapper[5124]: I0126 00:22:00.156276 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489782-p756r" Jan 26 00:22:00 crc kubenswrapper[5124]: I0126 00:22:00.158799 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-26tfw\"" Jan 26 00:22:00 crc kubenswrapper[5124]: I0126 00:22:00.159068 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 26 00:22:00 crc kubenswrapper[5124]: I0126 00:22:00.159852 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 26 00:22:00 crc kubenswrapper[5124]: I0126 00:22:00.252573 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq29w\" (UniqueName: \"kubernetes.io/projected/c283d038-4574-4bf6-a5e3-203f888f1367-kube-api-access-bq29w\") pod \"auto-csr-approver-29489782-p756r\" (UID: \"c283d038-4574-4bf6-a5e3-203f888f1367\") " pod="openshift-infra/auto-csr-approver-29489782-p756r" Jan 26 00:22:00 crc kubenswrapper[5124]: I0126 00:22:00.357092 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bq29w\" (UniqueName: \"kubernetes.io/projected/c283d038-4574-4bf6-a5e3-203f888f1367-kube-api-access-bq29w\") pod \"auto-csr-approver-29489782-p756r\" (UID: \"c283d038-4574-4bf6-a5e3-203f888f1367\") " pod="openshift-infra/auto-csr-approver-29489782-p756r" Jan 26 00:22:00 crc kubenswrapper[5124]: I0126 00:22:00.383733 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq29w\" (UniqueName: \"kubernetes.io/projected/c283d038-4574-4bf6-a5e3-203f888f1367-kube-api-access-bq29w\") pod \"auto-csr-approver-29489782-p756r\" (UID: \"c283d038-4574-4bf6-a5e3-203f888f1367\") " pod="openshift-infra/auto-csr-approver-29489782-p756r" Jan 26 00:22:00 crc kubenswrapper[5124]: I0126 00:22:00.509289 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489782-p756r" Jan 26 00:22:02 crc kubenswrapper[5124]: I0126 00:22:02.877725 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-669c9f96b5-xbrsv" Jan 26 00:22:18 crc kubenswrapper[5124]: I0126 00:22:18.333070 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489782-p756r"] Jan 26 00:22:18 crc kubenswrapper[5124]: W0126 00:22:18.358923 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc283d038_4574_4bf6_a5e3_203f888f1367.slice/crio-39ce0c4d226e93a8704184c5a0df73aeadf76f57964031032398f96fb893b349 WatchSource:0}: Error finding container 39ce0c4d226e93a8704184c5a0df73aeadf76f57964031032398f96fb893b349: Status 404 returned error can't find the container with id 39ce0c4d226e93a8704184c5a0df73aeadf76f57964031032398f96fb893b349 Jan 26 00:22:19 crc kubenswrapper[5124]: I0126 00:22:19.069471 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-4fx8r" event={"ID":"7fbd1c45-4148-4f7b-bf5b-20a9e451aeb1","Type":"ContainerStarted","Data":"febfc7653ce7ef967266beaca201d0edafb70285c4fec3ffa29a9f216e488d3f"} Jan 26 00:22:19 crc kubenswrapper[5124]: I0126 00:22:19.071230 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489782-p756r" event={"ID":"c283d038-4574-4bf6-a5e3-203f888f1367","Type":"ContainerStarted","Data":"39ce0c4d226e93a8704184c5a0df73aeadf76f57964031032398f96fb893b349"} Jan 26 00:22:19 crc kubenswrapper[5124]: I0126 00:22:19.072597 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"902ad87b-a27d-49e6-a2e3-1e3e274d16d1","Type":"ContainerStarted","Data":"5e72ec111a7e23625ae6f12470ed0278550ba3b8f84ccff1b365fda3f6284060"} Jan 26 00:22:19 crc kubenswrapper[5124]: I0126 00:22:19.090352 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-4fx8r" podStartSLOduration=6.859742294 podStartE2EDuration="34.09033463s" podCreationTimestamp="2026-01-26 00:21:45 +0000 UTC" firstStartedPulling="2026-01-26 00:21:50.911773961 +0000 UTC m=+788.820693310" lastFinishedPulling="2026-01-26 00:22:18.142366297 +0000 UTC m=+816.051285646" observedRunningTime="2026-01-26 00:22:19.087566346 +0000 UTC m=+816.996485695" watchObservedRunningTime="2026-01-26 00:22:19.09033463 +0000 UTC m=+816.999253979" Jan 26 00:22:19 crc kubenswrapper[5124]: I0126 00:22:19.197320 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 26 00:22:19 crc kubenswrapper[5124]: I0126 00:22:19.232767 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 26 00:22:20 crc kubenswrapper[5124]: I0126 00:22:20.081665 5124 generic.go:358] "Generic (PLEG): container finished" podID="902ad87b-a27d-49e6-a2e3-1e3e274d16d1" containerID="5e72ec111a7e23625ae6f12470ed0278550ba3b8f84ccff1b365fda3f6284060" exitCode=0 Jan 26 00:22:20 crc kubenswrapper[5124]: I0126 00:22:20.081731 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" 
event={"ID":"902ad87b-a27d-49e6-a2e3-1e3e274d16d1","Type":"ContainerDied","Data":"5e72ec111a7e23625ae6f12470ed0278550ba3b8f84ccff1b365fda3f6284060"} Jan 26 00:22:20 crc kubenswrapper[5124]: I0126 00:22:20.083896 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489782-p756r" event={"ID":"c283d038-4574-4bf6-a5e3-203f888f1367","Type":"ContainerStarted","Data":"7ed15fab4846cd64a3cf0394a3b36f1423d04511f8706eba7b29d2289ede7297"} Jan 26 00:22:20 crc kubenswrapper[5124]: I0126 00:22:20.144984 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29489782-p756r" podStartSLOduration=19.031840112 podStartE2EDuration="20.144963019s" podCreationTimestamp="2026-01-26 00:22:00 +0000 UTC" firstStartedPulling="2026-01-26 00:22:18.360615412 +0000 UTC m=+816.269534761" lastFinishedPulling="2026-01-26 00:22:19.473738319 +0000 UTC m=+817.382657668" observedRunningTime="2026-01-26 00:22:20.133989073 +0000 UTC m=+818.042908422" watchObservedRunningTime="2026-01-26 00:22:20.144963019 +0000 UTC m=+818.053882368" Jan 26 00:22:21 crc kubenswrapper[5124]: I0126 00:22:21.092026 5124 generic.go:358] "Generic (PLEG): container finished" podID="c283d038-4574-4bf6-a5e3-203f888f1367" containerID="7ed15fab4846cd64a3cf0394a3b36f1423d04511f8706eba7b29d2289ede7297" exitCode=0 Jan 26 00:22:21 crc kubenswrapper[5124]: I0126 00:22:21.092128 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489782-p756r" event={"ID":"c283d038-4574-4bf6-a5e3-203f888f1367","Type":"ContainerDied","Data":"7ed15fab4846cd64a3cf0394a3b36f1423d04511f8706eba7b29d2289ede7297"} Jan 26 00:22:21 crc kubenswrapper[5124]: I0126 00:22:21.096006 5124 generic.go:358] "Generic (PLEG): container finished" podID="902ad87b-a27d-49e6-a2e3-1e3e274d16d1" containerID="66d437cf8334889d13648608c95ccbfb89ca37cdd2bc9e47e08f469c72d09f41" exitCode=0 Jan 26 00:22:21 crc kubenswrapper[5124]: I0126 00:22:21.096104 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"902ad87b-a27d-49e6-a2e3-1e3e274d16d1","Type":"ContainerDied","Data":"66d437cf8334889d13648608c95ccbfb89ca37cdd2bc9e47e08f469c72d09f41"} Jan 26 00:22:21 crc kubenswrapper[5124]: I0126 00:22:21.734034 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-9s5jb"] Jan 26 00:22:21 crc kubenswrapper[5124]: I0126 00:22:21.738106 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-9s5jb" Jan 26 00:22:21 crc kubenswrapper[5124]: I0126 00:22:21.740864 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\"" Jan 26 00:22:21 crc kubenswrapper[5124]: I0126 00:22:21.742656 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\"" Jan 26 00:22:21 crc kubenswrapper[5124]: I0126 00:22:21.743135 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-z4c4g\"" Jan 26 00:22:21 crc kubenswrapper[5124]: I0126 00:22:21.745417 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-9s5jb"] Jan 26 00:22:21 crc kubenswrapper[5124]: I0126 00:22:21.847917 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d221106-7c92-4968-8bb3-20be6806e046-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-9s5jb\" (UID: \"4d221106-7c92-4968-8bb3-20be6806e046\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-9s5jb" Jan 26 00:22:21 crc kubenswrapper[5124]: I0126 00:22:21.847983 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94p2f\" (UniqueName: \"kubernetes.io/projected/4d221106-7c92-4968-8bb3-20be6806e046-kube-api-access-94p2f\") pod \"cert-manager-webhook-7894b5b9b4-9s5jb\" (UID: \"4d221106-7c92-4968-8bb3-20be6806e046\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-9s5jb" Jan 26 00:22:21 crc kubenswrapper[5124]: I0126 00:22:21.949767 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-94p2f\" (UniqueName: \"kubernetes.io/projected/4d221106-7c92-4968-8bb3-20be6806e046-kube-api-access-94p2f\") pod \"cert-manager-webhook-7894b5b9b4-9s5jb\" (UID: \"4d221106-7c92-4968-8bb3-20be6806e046\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-9s5jb" Jan 26 00:22:21 crc kubenswrapper[5124]: I0126 00:22:21.949897 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d221106-7c92-4968-8bb3-20be6806e046-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-9s5jb\" (UID: \"4d221106-7c92-4968-8bb3-20be6806e046\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-9s5jb" Jan 26 00:22:21 crc kubenswrapper[5124]: I0126 00:22:21.971224 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d221106-7c92-4968-8bb3-20be6806e046-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-9s5jb\" (UID: \"4d221106-7c92-4968-8bb3-20be6806e046\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-9s5jb" Jan 26 00:22:21 crc kubenswrapper[5124]: I0126 00:22:21.971938 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-94p2f\" (UniqueName: \"kubernetes.io/projected/4d221106-7c92-4968-8bb3-20be6806e046-kube-api-access-94p2f\") pod \"cert-manager-webhook-7894b5b9b4-9s5jb\" (UID: \"4d221106-7c92-4968-8bb3-20be6806e046\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-9s5jb" Jan 26 00:22:22 crc kubenswrapper[5124]: I0126 00:22:22.052199 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-9s5jb" Jan 26 00:22:22 crc kubenswrapper[5124]: I0126 00:22:22.105758 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"902ad87b-a27d-49e6-a2e3-1e3e274d16d1","Type":"ContainerStarted","Data":"a1ab999a1a016f579a6344cdcccaed2e985ae243550665c250ec757c72539c84"} Jan 26 00:22:22 crc kubenswrapper[5124]: I0126 00:22:22.106265 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:22 crc kubenswrapper[5124]: I0126 00:22:22.140771 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=4.918885768 podStartE2EDuration="30.140741437s" podCreationTimestamp="2026-01-26 00:21:52 +0000 UTC" firstStartedPulling="2026-01-26 00:21:53.135151956 +0000 UTC m=+791.044071295" lastFinishedPulling="2026-01-26 00:22:18.357007625 +0000 UTC m=+816.265926964" observedRunningTime="2026-01-26 00:22:22.138039654 +0000 UTC m=+820.046959003" watchObservedRunningTime="2026-01-26 00:22:22.140741437 +0000 UTC m=+820.049660786" Jan 26 00:22:22 crc kubenswrapper[5124]: I0126 00:22:22.296300 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-9s5jb"] Jan 26 00:22:22 crc kubenswrapper[5124]: W0126 00:22:22.313309 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d221106_7c92_4968_8bb3_20be6806e046.slice/crio-5eda16d73dffa4f7edc1831b96a3a28e96f75a07ab0667bd1b4cd8443db7ac38 WatchSource:0}: Error finding container 5eda16d73dffa4f7edc1831b96a3a28e96f75a07ab0667bd1b4cd8443db7ac38: Status 404 returned error can't find the container with id 5eda16d73dffa4f7edc1831b96a3a28e96f75a07ab0667bd1b4cd8443db7ac38 Jan 26 00:22:22 crc kubenswrapper[5124]: I0126 00:22:22.374157 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489782-p756r" Jan 26 00:22:22 crc kubenswrapper[5124]: I0126 00:22:22.456442 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bq29w\" (UniqueName: \"kubernetes.io/projected/c283d038-4574-4bf6-a5e3-203f888f1367-kube-api-access-bq29w\") pod \"c283d038-4574-4bf6-a5e3-203f888f1367\" (UID: \"c283d038-4574-4bf6-a5e3-203f888f1367\") " Jan 26 00:22:22 crc kubenswrapper[5124]: I0126 00:22:22.460999 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c283d038-4574-4bf6-a5e3-203f888f1367-kube-api-access-bq29w" (OuterVolumeSpecName: "kube-api-access-bq29w") pod "c283d038-4574-4bf6-a5e3-203f888f1367" (UID: "c283d038-4574-4bf6-a5e3-203f888f1367"). InnerVolumeSpecName "kube-api-access-bq29w". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:22:22 crc kubenswrapper[5124]: I0126 00:22:22.558138 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bq29w\" (UniqueName: \"kubernetes.io/projected/c283d038-4574-4bf6-a5e3-203f888f1367-kube-api-access-bq29w\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.112540 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-9s5jb" event={"ID":"4d221106-7c92-4968-8bb3-20be6806e046","Type":"ContainerStarted","Data":"5eda16d73dffa4f7edc1831b96a3a28e96f75a07ab0667bd1b4cd8443db7ac38"} Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.115025 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489782-p756r" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.118838 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489782-p756r" event={"ID":"c283d038-4574-4bf6-a5e3-203f888f1367","Type":"ContainerDied","Data":"39ce0c4d226e93a8704184c5a0df73aeadf76f57964031032398f96fb893b349"} Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.118905 5124 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39ce0c4d226e93a8704184c5a0df73aeadf76f57964031032398f96fb893b349" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.464873 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29489776-zlkf6"] Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.469464 5124 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29489776-zlkf6"] Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.574687 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.575810 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c283d038-4574-4bf6-a5e3-203f888f1367" containerName="oc" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.575837 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="c283d038-4574-4bf6-a5e3-203f888f1367" containerName="oc" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.575975 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="c283d038-4574-4bf6-a5e3-203f888f1367" containerName="oc" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.579898 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.581972 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-sys-config\"" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.583074 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-cbnx8\"" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.583243 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-ca\"" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.583296 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-global-ca\"" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.599486 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.672921 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fcfbd368-aab6-40d7-bccc-a45032836a7c-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.672978 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-cbnx8-push\" (UniqueName: \"kubernetes.io/secret/fcfbd368-aab6-40d7-bccc-a45032836a7c-builder-dockercfg-cbnx8-push\") pod \"service-telemetry-operator-1-build\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.673003 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gcp9\" (UniqueName: \"kubernetes.io/projected/fcfbd368-aab6-40d7-bccc-a45032836a7c-kube-api-access-4gcp9\") pod \"service-telemetry-operator-1-build\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.673042 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/fcfbd368-aab6-40d7-bccc-a45032836a7c-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.673062 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/fcfbd368-aab6-40d7-bccc-a45032836a7c-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.673083 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/fcfbd368-aab6-40d7-bccc-a45032836a7c-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.673139 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fcfbd368-aab6-40d7-bccc-a45032836a7c-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.673175 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/fcfbd368-aab6-40d7-bccc-a45032836a7c-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.673211 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-cbnx8-pull\" (UniqueName: \"kubernetes.io/secret/fcfbd368-aab6-40d7-bccc-a45032836a7c-builder-dockercfg-cbnx8-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.673237 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/fcfbd368-aab6-40d7-bccc-a45032836a7c-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.673256 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/fcfbd368-aab6-40d7-bccc-a45032836a7c-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.673287 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/fcfbd368-aab6-40d7-bccc-a45032836a7c-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.774305 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fcfbd368-aab6-40d7-bccc-a45032836a7c-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.774388 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fcfbd368-aab6-40d7-bccc-a45032836a7c-node-pullsecrets\") pod 
\"service-telemetry-operator-1-build\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.774416 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/fcfbd368-aab6-40d7-bccc-a45032836a7c-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.774450 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-cbnx8-pull\" (UniqueName: \"kubernetes.io/secret/fcfbd368-aab6-40d7-bccc-a45032836a7c-builder-dockercfg-cbnx8-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.774478 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/fcfbd368-aab6-40d7-bccc-a45032836a7c-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.774498 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/fcfbd368-aab6-40d7-bccc-a45032836a7c-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.774528 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/fcfbd368-aab6-40d7-bccc-a45032836a7c-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.774599 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fcfbd368-aab6-40d7-bccc-a45032836a7c-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.774626 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-cbnx8-push\" (UniqueName: \"kubernetes.io/secret/fcfbd368-aab6-40d7-bccc-a45032836a7c-builder-dockercfg-cbnx8-push\") pod \"service-telemetry-operator-1-build\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.774647 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4gcp9\" (UniqueName: \"kubernetes.io/projected/fcfbd368-aab6-40d7-bccc-a45032836a7c-kube-api-access-4gcp9\") pod \"service-telemetry-operator-1-build\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:23 crc kubenswrapper[5124]: 
I0126 00:22:23.774687 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/fcfbd368-aab6-40d7-bccc-a45032836a7c-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.774714 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/fcfbd368-aab6-40d7-bccc-a45032836a7c-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.776162 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/fcfbd368-aab6-40d7-bccc-a45032836a7c-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.777148 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fcfbd368-aab6-40d7-bccc-a45032836a7c-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.777287 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fcfbd368-aab6-40d7-bccc-a45032836a7c-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.778004 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/fcfbd368-aab6-40d7-bccc-a45032836a7c-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.778363 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/fcfbd368-aab6-40d7-bccc-a45032836a7c-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.778433 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/fcfbd368-aab6-40d7-bccc-a45032836a7c-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.778611 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/fcfbd368-aab6-40d7-bccc-a45032836a7c-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: 
\"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.778691 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/fcfbd368-aab6-40d7-bccc-a45032836a7c-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.779811 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fcfbd368-aab6-40d7-bccc-a45032836a7c-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.783563 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-cbnx8-pull\" (UniqueName: \"kubernetes.io/secret/fcfbd368-aab6-40d7-bccc-a45032836a7c-builder-dockercfg-cbnx8-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.784707 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-cbnx8-push\" (UniqueName: \"kubernetes.io/secret/fcfbd368-aab6-40d7-bccc-a45032836a7c-builder-dockercfg-cbnx8-push\") pod \"service-telemetry-operator-1-build\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.803284 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gcp9\" (UniqueName: \"kubernetes.io/projected/fcfbd368-aab6-40d7-bccc-a45032836a7c-kube-api-access-4gcp9\") pod \"service-telemetry-operator-1-build\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:23 crc kubenswrapper[5124]: I0126 00:22:23.903361 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:24 crc kubenswrapper[5124]: I0126 00:22:24.166864 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 26 00:22:24 crc kubenswrapper[5124]: W0126 00:22:24.173376 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfcfbd368_aab6_40d7_bccc_a45032836a7c.slice/crio-dfe643465425137f85eb5fede04ebf84c944f131a429efde62b02ec9ee41b637 WatchSource:0}: Error finding container dfe643465425137f85eb5fede04ebf84c944f131a429efde62b02ec9ee41b637: Status 404 returned error can't find the container with id dfe643465425137f85eb5fede04ebf84c944f131a429efde62b02ec9ee41b637 Jan 26 00:22:24 crc kubenswrapper[5124]: I0126 00:22:24.375766 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fe2d2b1-e495-4127-bda5-97d67b08dc73" path="/var/lib/kubelet/pods/3fe2d2b1-e495-4127-bda5-97d67b08dc73/volumes" Jan 26 00:22:25 crc kubenswrapper[5124]: I0126 00:22:25.124662 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"fcfbd368-aab6-40d7-bccc-a45032836a7c","Type":"ContainerStarted","Data":"dfe643465425137f85eb5fede04ebf84c944f131a429efde62b02ec9ee41b637"} Jan 26 00:22:28 crc kubenswrapper[5124]: I0126 00:22:28.078388 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-dth5b"] Jan 26 00:22:28 crc kubenswrapper[5124]: I0126 00:22:28.085631 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-dth5b" Jan 26 00:22:28 crc kubenswrapper[5124]: I0126 00:22:28.087903 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-p8644\"" Jan 26 00:22:28 crc kubenswrapper[5124]: I0126 00:22:28.091901 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-dth5b"] Jan 26 00:22:28 crc kubenswrapper[5124]: I0126 00:22:28.243115 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3a891fe1-31ca-4a63-bdba-3c5a8857eec1-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-dth5b\" (UID: \"3a891fe1-31ca-4a63-bdba-3c5a8857eec1\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-dth5b" Jan 26 00:22:28 crc kubenswrapper[5124]: I0126 00:22:28.243211 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czjz6\" (UniqueName: \"kubernetes.io/projected/3a891fe1-31ca-4a63-bdba-3c5a8857eec1-kube-api-access-czjz6\") pod \"cert-manager-cainjector-7dbf76d5c8-dth5b\" (UID: \"3a891fe1-31ca-4a63-bdba-3c5a8857eec1\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-dth5b" Jan 26 00:22:28 crc kubenswrapper[5124]: I0126 00:22:28.344603 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3a891fe1-31ca-4a63-bdba-3c5a8857eec1-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-dth5b\" (UID: \"3a891fe1-31ca-4a63-bdba-3c5a8857eec1\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-dth5b" Jan 26 00:22:28 crc kubenswrapper[5124]: I0126 00:22:28.344648 5124 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-czjz6\" (UniqueName: \"kubernetes.io/projected/3a891fe1-31ca-4a63-bdba-3c5a8857eec1-kube-api-access-czjz6\") pod \"cert-manager-cainjector-7dbf76d5c8-dth5b\" (UID: \"3a891fe1-31ca-4a63-bdba-3c5a8857eec1\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-dth5b" Jan 26 00:22:28 crc kubenswrapper[5124]: I0126 00:22:28.364390 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3a891fe1-31ca-4a63-bdba-3c5a8857eec1-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-dth5b\" (UID: \"3a891fe1-31ca-4a63-bdba-3c5a8857eec1\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-dth5b" Jan 26 00:22:28 crc kubenswrapper[5124]: I0126 00:22:28.364867 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-czjz6\" (UniqueName: \"kubernetes.io/projected/3a891fe1-31ca-4a63-bdba-3c5a8857eec1-kube-api-access-czjz6\") pod \"cert-manager-cainjector-7dbf76d5c8-dth5b\" (UID: \"3a891fe1-31ca-4a63-bdba-3c5a8857eec1\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-dth5b" Jan 26 00:22:28 crc kubenswrapper[5124]: I0126 00:22:28.401698 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-dth5b" Jan 26 00:22:33 crc kubenswrapper[5124]: I0126 00:22:33.210409 5124 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="902ad87b-a27d-49e6-a2e3-1e3e274d16d1" containerName="elasticsearch" probeResult="failure" output=< Jan 26 00:22:33 crc kubenswrapper[5124]: {"timestamp": "2026-01-26T00:22:33+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 26 00:22:33 crc kubenswrapper[5124]: > Jan 26 00:22:33 crc kubenswrapper[5124]: I0126 00:22:33.595079 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 26 00:22:35 crc kubenswrapper[5124]: I0126 00:22:35.299410 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 26 00:22:35 crc kubenswrapper[5124]: I0126 00:22:35.305311 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:35 crc kubenswrapper[5124]: I0126 00:22:35.307257 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-global-ca\"" Jan 26 00:22:35 crc kubenswrapper[5124]: I0126 00:22:35.307267 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-sys-config\"" Jan 26 00:22:35 crc kubenswrapper[5124]: I0126 00:22:35.307478 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-ca\"" Jan 26 00:22:35 crc kubenswrapper[5124]: I0126 00:22:35.326703 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 26 00:22:35 crc kubenswrapper[5124]: I0126 00:22:35.464540 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cd238caf-5876-429a-9f3a-594804065e20-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:35 crc kubenswrapper[5124]: I0126 00:22:35.464627 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cd238caf-5876-429a-9f3a-594804065e20-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:35 crc kubenswrapper[5124]: I0126 00:22:35.464745 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-cbnx8-pull\" (UniqueName: \"kubernetes.io/secret/cd238caf-5876-429a-9f3a-594804065e20-builder-dockercfg-cbnx8-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:35 crc kubenswrapper[5124]: I0126 00:22:35.464818 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cd238caf-5876-429a-9f3a-594804065e20-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:35 crc kubenswrapper[5124]: I0126 00:22:35.464841 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cd238caf-5876-429a-9f3a-594804065e20-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:35 crc kubenswrapper[5124]: I0126 00:22:35.464892 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cd238caf-5876-429a-9f3a-594804065e20-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:35 crc 
kubenswrapper[5124]: I0126 00:22:35.464926 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-cbnx8-push\" (UniqueName: \"kubernetes.io/secret/cd238caf-5876-429a-9f3a-594804065e20-builder-dockercfg-cbnx8-push\") pod \"service-telemetry-operator-2-build\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:35 crc kubenswrapper[5124]: I0126 00:22:35.464945 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cd238caf-5876-429a-9f3a-594804065e20-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:35 crc kubenswrapper[5124]: I0126 00:22:35.464968 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xd6q\" (UniqueName: \"kubernetes.io/projected/cd238caf-5876-429a-9f3a-594804065e20-kube-api-access-7xd6q\") pod \"service-telemetry-operator-2-build\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:35 crc kubenswrapper[5124]: I0126 00:22:35.464994 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cd238caf-5876-429a-9f3a-594804065e20-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:35 crc kubenswrapper[5124]: I0126 00:22:35.465034 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cd238caf-5876-429a-9f3a-594804065e20-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:35 crc kubenswrapper[5124]: I0126 00:22:35.465070 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cd238caf-5876-429a-9f3a-594804065e20-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:35 crc kubenswrapper[5124]: I0126 00:22:35.566642 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-cbnx8-pull\" (UniqueName: \"kubernetes.io/secret/cd238caf-5876-429a-9f3a-594804065e20-builder-dockercfg-cbnx8-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:35 crc kubenswrapper[5124]: I0126 00:22:35.567190 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cd238caf-5876-429a-9f3a-594804065e20-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:35 crc kubenswrapper[5124]: I0126 00:22:35.567212 5124 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cd238caf-5876-429a-9f3a-594804065e20-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:35 crc kubenswrapper[5124]: I0126 00:22:35.567853 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cd238caf-5876-429a-9f3a-594804065e20-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:35 crc kubenswrapper[5124]: I0126 00:22:35.567913 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-cbnx8-push\" (UniqueName: \"kubernetes.io/secret/cd238caf-5876-429a-9f3a-594804065e20-builder-dockercfg-cbnx8-push\") pod \"service-telemetry-operator-2-build\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:35 crc kubenswrapper[5124]: I0126 00:22:35.567938 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cd238caf-5876-429a-9f3a-594804065e20-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:35 crc kubenswrapper[5124]: I0126 00:22:35.567960 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7xd6q\" (UniqueName: \"kubernetes.io/projected/cd238caf-5876-429a-9f3a-594804065e20-kube-api-access-7xd6q\") pod \"service-telemetry-operator-2-build\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:35 crc kubenswrapper[5124]: I0126 00:22:35.567990 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cd238caf-5876-429a-9f3a-594804065e20-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:35 crc kubenswrapper[5124]: I0126 00:22:35.567995 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cd238caf-5876-429a-9f3a-594804065e20-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:35 crc kubenswrapper[5124]: I0126 00:22:35.568037 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cd238caf-5876-429a-9f3a-594804065e20-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:35 crc kubenswrapper[5124]: I0126 00:22:35.568081 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cd238caf-5876-429a-9f3a-594804065e20-node-pullsecrets\") pod 
\"service-telemetry-operator-2-build\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:35 crc kubenswrapper[5124]: I0126 00:22:35.568108 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cd238caf-5876-429a-9f3a-594804065e20-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:35 crc kubenswrapper[5124]: I0126 00:22:35.568146 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cd238caf-5876-429a-9f3a-594804065e20-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:35 crc kubenswrapper[5124]: I0126 00:22:35.568238 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cd238caf-5876-429a-9f3a-594804065e20-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:35 crc kubenswrapper[5124]: I0126 00:22:35.569564 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cd238caf-5876-429a-9f3a-594804065e20-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:35 crc kubenswrapper[5124]: I0126 00:22:35.569761 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cd238caf-5876-429a-9f3a-594804065e20-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:35 crc kubenswrapper[5124]: I0126 00:22:35.569811 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cd238caf-5876-429a-9f3a-594804065e20-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:35 crc kubenswrapper[5124]: I0126 00:22:35.570156 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cd238caf-5876-429a-9f3a-594804065e20-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:35 crc kubenswrapper[5124]: I0126 00:22:35.573761 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cd238caf-5876-429a-9f3a-594804065e20-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:35 crc kubenswrapper[5124]: I0126 00:22:35.573947 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cd238caf-5876-429a-9f3a-594804065e20-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:35 crc kubenswrapper[5124]: I0126 00:22:35.578781 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-cbnx8-pull\" (UniqueName: \"kubernetes.io/secret/cd238caf-5876-429a-9f3a-594804065e20-builder-dockercfg-cbnx8-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:35 crc kubenswrapper[5124]: I0126 00:22:35.585554 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xd6q\" (UniqueName: \"kubernetes.io/projected/cd238caf-5876-429a-9f3a-594804065e20-kube-api-access-7xd6q\") pod \"service-telemetry-operator-2-build\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:35 crc kubenswrapper[5124]: I0126 00:22:35.604692 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cd238caf-5876-429a-9f3a-594804065e20-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:35 crc kubenswrapper[5124]: I0126 00:22:35.611077 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-cbnx8-push\" (UniqueName: \"kubernetes.io/secret/cd238caf-5876-429a-9f3a-594804065e20-builder-dockercfg-cbnx8-push\") pod \"service-telemetry-operator-2-build\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:35 crc kubenswrapper[5124]: I0126 00:22:35.638976 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:35 crc kubenswrapper[5124]: I0126 00:22:35.653694 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-dth5b"] Jan 26 00:22:35 crc kubenswrapper[5124]: W0126 00:22:35.677466 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a891fe1_31ca_4a63_bdba_3c5a8857eec1.slice/crio-fd86528de86164cc417762ca9554f11e527f4bb547d13bd59a0ce09da4ea735f WatchSource:0}: Error finding container fd86528de86164cc417762ca9554f11e527f4bb547d13bd59a0ce09da4ea735f: Status 404 returned error can't find the container with id fd86528de86164cc417762ca9554f11e527f4bb547d13bd59a0ce09da4ea735f Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.000965 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.221846 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-9s5jb" event={"ID":"4d221106-7c92-4968-8bb3-20be6806e046","Type":"ContainerStarted","Data":"8ef9b22c2bc0dcaea017feb363ada1dbe88aa14cab16d6ba3700fd94073a1ce5"} Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.222155 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-9s5jb" Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.239941 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"fcfbd368-aab6-40d7-bccc-a45032836a7c","Type":"ContainerStarted","Data":"57594037e4f498e8348e73c0ac7a05e953561944eef7dcf85291234f0fbf4126"} Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.240110 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-1-build" podUID="fcfbd368-aab6-40d7-bccc-a45032836a7c" containerName="manage-dockerfile" containerID="cri-o://57594037e4f498e8348e73c0ac7a05e953561944eef7dcf85291234f0fbf4126" gracePeriod=30 Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.241886 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-7894b5b9b4-9s5jb" podStartSLOduration=2.130821025 podStartE2EDuration="15.241863927s" podCreationTimestamp="2026-01-26 00:22:21 +0000 UTC" firstStartedPulling="2026-01-26 00:22:22.3155201 +0000 UTC m=+820.224439449" lastFinishedPulling="2026-01-26 00:22:35.426562992 +0000 UTC m=+833.335482351" observedRunningTime="2026-01-26 00:22:36.238890407 +0000 UTC m=+834.147809756" watchObservedRunningTime="2026-01-26 00:22:36.241863927 +0000 UTC m=+834.150783276" Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.247409 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-dth5b" event={"ID":"3a891fe1-31ca-4a63-bdba-3c5a8857eec1","Type":"ContainerStarted","Data":"ca8c626ebffa49bab569573f6824a29b5a41b032300ff9c0ee15e6408b294c9b"} Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.247444 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-dth5b" event={"ID":"3a891fe1-31ca-4a63-bdba-3c5a8857eec1","Type":"ContainerStarted","Data":"fd86528de86164cc417762ca9554f11e527f4bb547d13bd59a0ce09da4ea735f"} Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 
00:22:36.250390 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"cd238caf-5876-429a-9f3a-594804065e20","Type":"ContainerStarted","Data":"e6957b7310ffabb3ec3b22c53354d48d3a7ce1c6e4a5f607af9c36e75bba13e0"} Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.293124 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-dth5b" podStartSLOduration=8.293111049 podStartE2EDuration="8.293111049s" podCreationTimestamp="2026-01-26 00:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:22:36.29130072 +0000 UTC m=+834.200220069" watchObservedRunningTime="2026-01-26 00:22:36.293111049 +0000 UTC m=+834.202030398" Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.688831 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_fcfbd368-aab6-40d7-bccc-a45032836a7c/manage-dockerfile/0.log" Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.689128 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.709894 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fcfbd368-aab6-40d7-bccc-a45032836a7c-build-proxy-ca-bundles\") pod \"fcfbd368-aab6-40d7-bccc-a45032836a7c\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.709935 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fcfbd368-aab6-40d7-bccc-a45032836a7c-node-pullsecrets\") pod \"fcfbd368-aab6-40d7-bccc-a45032836a7c\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.710031 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/fcfbd368-aab6-40d7-bccc-a45032836a7c-container-storage-run\") pod \"fcfbd368-aab6-40d7-bccc-a45032836a7c\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.710060 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fcfbd368-aab6-40d7-bccc-a45032836a7c-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "fcfbd368-aab6-40d7-bccc-a45032836a7c" (UID: "fcfbd368-aab6-40d7-bccc-a45032836a7c"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.710132 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/fcfbd368-aab6-40d7-bccc-a45032836a7c-build-blob-cache\") pod \"fcfbd368-aab6-40d7-bccc-a45032836a7c\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.710213 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gcp9\" (UniqueName: \"kubernetes.io/projected/fcfbd368-aab6-40d7-bccc-a45032836a7c-kube-api-access-4gcp9\") pod \"fcfbd368-aab6-40d7-bccc-a45032836a7c\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.710241 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/fcfbd368-aab6-40d7-bccc-a45032836a7c-buildcachedir\") pod \"fcfbd368-aab6-40d7-bccc-a45032836a7c\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.710269 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-cbnx8-pull\" (UniqueName: \"kubernetes.io/secret/fcfbd368-aab6-40d7-bccc-a45032836a7c-builder-dockercfg-cbnx8-pull\") pod \"fcfbd368-aab6-40d7-bccc-a45032836a7c\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.710291 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcfbd368-aab6-40d7-bccc-a45032836a7c-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "fcfbd368-aab6-40d7-bccc-a45032836a7c" (UID: "fcfbd368-aab6-40d7-bccc-a45032836a7c"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.710308 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-cbnx8-push\" (UniqueName: \"kubernetes.io/secret/fcfbd368-aab6-40d7-bccc-a45032836a7c-builder-dockercfg-cbnx8-push\") pod \"fcfbd368-aab6-40d7-bccc-a45032836a7c\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.710334 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fcfbd368-aab6-40d7-bccc-a45032836a7c-build-ca-bundles\") pod \"fcfbd368-aab6-40d7-bccc-a45032836a7c\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.710356 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/fcfbd368-aab6-40d7-bccc-a45032836a7c-container-storage-root\") pod \"fcfbd368-aab6-40d7-bccc-a45032836a7c\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.710417 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/fcfbd368-aab6-40d7-bccc-a45032836a7c-buildworkdir\") pod \"fcfbd368-aab6-40d7-bccc-a45032836a7c\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.710468 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/fcfbd368-aab6-40d7-bccc-a45032836a7c-build-system-configs\") pod \"fcfbd368-aab6-40d7-bccc-a45032836a7c\" (UID: \"fcfbd368-aab6-40d7-bccc-a45032836a7c\") " Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.710622 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcfbd368-aab6-40d7-bccc-a45032836a7c-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "fcfbd368-aab6-40d7-bccc-a45032836a7c" (UID: "fcfbd368-aab6-40d7-bccc-a45032836a7c"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.710886 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcfbd368-aab6-40d7-bccc-a45032836a7c-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "fcfbd368-aab6-40d7-bccc-a45032836a7c" (UID: "fcfbd368-aab6-40d7-bccc-a45032836a7c"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.710950 5124 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/fcfbd368-aab6-40d7-bccc-a45032836a7c-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.710964 5124 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fcfbd368-aab6-40d7-bccc-a45032836a7c-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.710972 5124 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fcfbd368-aab6-40d7-bccc-a45032836a7c-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.710971 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcfbd368-aab6-40d7-bccc-a45032836a7c-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "fcfbd368-aab6-40d7-bccc-a45032836a7c" (UID: "fcfbd368-aab6-40d7-bccc-a45032836a7c"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.711052 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fcfbd368-aab6-40d7-bccc-a45032836a7c-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "fcfbd368-aab6-40d7-bccc-a45032836a7c" (UID: "fcfbd368-aab6-40d7-bccc-a45032836a7c"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.711387 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcfbd368-aab6-40d7-bccc-a45032836a7c-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "fcfbd368-aab6-40d7-bccc-a45032836a7c" (UID: "fcfbd368-aab6-40d7-bccc-a45032836a7c"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.711581 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcfbd368-aab6-40d7-bccc-a45032836a7c-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "fcfbd368-aab6-40d7-bccc-a45032836a7c" (UID: "fcfbd368-aab6-40d7-bccc-a45032836a7c"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.712135 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcfbd368-aab6-40d7-bccc-a45032836a7c-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "fcfbd368-aab6-40d7-bccc-a45032836a7c" (UID: "fcfbd368-aab6-40d7-bccc-a45032836a7c"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.720816 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcfbd368-aab6-40d7-bccc-a45032836a7c-builder-dockercfg-cbnx8-push" (OuterVolumeSpecName: "builder-dockercfg-cbnx8-push") pod "fcfbd368-aab6-40d7-bccc-a45032836a7c" (UID: "fcfbd368-aab6-40d7-bccc-a45032836a7c"). 
InnerVolumeSpecName "builder-dockercfg-cbnx8-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.723476 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcfbd368-aab6-40d7-bccc-a45032836a7c-kube-api-access-4gcp9" (OuterVolumeSpecName: "kube-api-access-4gcp9") pod "fcfbd368-aab6-40d7-bccc-a45032836a7c" (UID: "fcfbd368-aab6-40d7-bccc-a45032836a7c"). InnerVolumeSpecName "kube-api-access-4gcp9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.726711 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcfbd368-aab6-40d7-bccc-a45032836a7c-builder-dockercfg-cbnx8-pull" (OuterVolumeSpecName: "builder-dockercfg-cbnx8-pull") pod "fcfbd368-aab6-40d7-bccc-a45032836a7c" (UID: "fcfbd368-aab6-40d7-bccc-a45032836a7c"). InnerVolumeSpecName "builder-dockercfg-cbnx8-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.812395 5124 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/fcfbd368-aab6-40d7-bccc-a45032836a7c-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.812461 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4gcp9\" (UniqueName: \"kubernetes.io/projected/fcfbd368-aab6-40d7-bccc-a45032836a7c-kube-api-access-4gcp9\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.812486 5124 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/fcfbd368-aab6-40d7-bccc-a45032836a7c-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.812501 5124 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-cbnx8-pull\" (UniqueName: \"kubernetes.io/secret/fcfbd368-aab6-40d7-bccc-a45032836a7c-builder-dockercfg-cbnx8-pull\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.812516 5124 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-cbnx8-push\" (UniqueName: \"kubernetes.io/secret/fcfbd368-aab6-40d7-bccc-a45032836a7c-builder-dockercfg-cbnx8-push\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.812531 5124 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fcfbd368-aab6-40d7-bccc-a45032836a7c-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.812546 5124 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/fcfbd368-aab6-40d7-bccc-a45032836a7c-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.812562 5124 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/fcfbd368-aab6-40d7-bccc-a45032836a7c-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:36 crc kubenswrapper[5124]: I0126 00:22:36.812578 5124 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/fcfbd368-aab6-40d7-bccc-a45032836a7c-build-system-configs\") on node \"crc\" DevicePath \"\"" 
Jan 26 00:22:37 crc kubenswrapper[5124]: I0126 00:22:37.257987 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_fcfbd368-aab6-40d7-bccc-a45032836a7c/manage-dockerfile/0.log" Jan 26 00:22:37 crc kubenswrapper[5124]: I0126 00:22:37.258030 5124 generic.go:358] "Generic (PLEG): container finished" podID="fcfbd368-aab6-40d7-bccc-a45032836a7c" containerID="57594037e4f498e8348e73c0ac7a05e953561944eef7dcf85291234f0fbf4126" exitCode=1 Jan 26 00:22:37 crc kubenswrapper[5124]: I0126 00:22:37.258164 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"fcfbd368-aab6-40d7-bccc-a45032836a7c","Type":"ContainerDied","Data":"57594037e4f498e8348e73c0ac7a05e953561944eef7dcf85291234f0fbf4126"} Jan 26 00:22:37 crc kubenswrapper[5124]: I0126 00:22:37.258193 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"fcfbd368-aab6-40d7-bccc-a45032836a7c","Type":"ContainerDied","Data":"dfe643465425137f85eb5fede04ebf84c944f131a429efde62b02ec9ee41b637"} Jan 26 00:22:37 crc kubenswrapper[5124]: I0126 00:22:37.258215 5124 scope.go:117] "RemoveContainer" containerID="57594037e4f498e8348e73c0ac7a05e953561944eef7dcf85291234f0fbf4126" Jan 26 00:22:37 crc kubenswrapper[5124]: I0126 00:22:37.258343 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:37 crc kubenswrapper[5124]: I0126 00:22:37.268906 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"cd238caf-5876-429a-9f3a-594804065e20","Type":"ContainerStarted","Data":"119519b53e45716170d5dd5dc54830094fb62b4a51947ab29c09936224221bbe"} Jan 26 00:22:37 crc kubenswrapper[5124]: I0126 00:22:37.286033 5124 scope.go:117] "RemoveContainer" containerID="57594037e4f498e8348e73c0ac7a05e953561944eef7dcf85291234f0fbf4126" Jan 26 00:22:37 crc kubenswrapper[5124]: E0126 00:22:37.288935 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57594037e4f498e8348e73c0ac7a05e953561944eef7dcf85291234f0fbf4126\": container with ID starting with 57594037e4f498e8348e73c0ac7a05e953561944eef7dcf85291234f0fbf4126 not found: ID does not exist" containerID="57594037e4f498e8348e73c0ac7a05e953561944eef7dcf85291234f0fbf4126" Jan 26 00:22:37 crc kubenswrapper[5124]: I0126 00:22:37.288968 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57594037e4f498e8348e73c0ac7a05e953561944eef7dcf85291234f0fbf4126"} err="failed to get container status \"57594037e4f498e8348e73c0ac7a05e953561944eef7dcf85291234f0fbf4126\": rpc error: code = NotFound desc = could not find container \"57594037e4f498e8348e73c0ac7a05e953561944eef7dcf85291234f0fbf4126\": container with ID starting with 57594037e4f498e8348e73c0ac7a05e953561944eef7dcf85291234f0fbf4126 not found: ID does not exist" Jan 26 00:22:37 crc kubenswrapper[5124]: I0126 00:22:37.377946 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 26 00:22:37 crc kubenswrapper[5124]: I0126 00:22:37.389311 5124 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 26 00:22:38 crc kubenswrapper[5124]: I0126 00:22:38.274292 5124 prober.go:120] "Probe failed" 
probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="902ad87b-a27d-49e6-a2e3-1e3e274d16d1" containerName="elasticsearch" probeResult="failure" output=< Jan 26 00:22:38 crc kubenswrapper[5124]: {"timestamp": "2026-01-26T00:22:38+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 26 00:22:38 crc kubenswrapper[5124]: > Jan 26 00:22:38 crc kubenswrapper[5124]: I0126 00:22:38.372705 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcfbd368-aab6-40d7-bccc-a45032836a7c" path="/var/lib/kubelet/pods/fcfbd368-aab6-40d7-bccc-a45032836a7c/volumes" Jan 26 00:22:40 crc kubenswrapper[5124]: I0126 00:22:40.677866 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858d87f86b-cbk4b"] Jan 26 00:22:40 crc kubenswrapper[5124]: I0126 00:22:40.678960 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fcfbd368-aab6-40d7-bccc-a45032836a7c" containerName="manage-dockerfile" Jan 26 00:22:40 crc kubenswrapper[5124]: I0126 00:22:40.678974 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcfbd368-aab6-40d7-bccc-a45032836a7c" containerName="manage-dockerfile" Jan 26 00:22:40 crc kubenswrapper[5124]: I0126 00:22:40.679077 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="fcfbd368-aab6-40d7-bccc-a45032836a7c" containerName="manage-dockerfile" Jan 26 00:22:40 crc kubenswrapper[5124]: I0126 00:22:40.683237 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-cbk4b" Jan 26 00:22:40 crc kubenswrapper[5124]: I0126 00:22:40.687920 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-m92cp\"" Jan 26 00:22:40 crc kubenswrapper[5124]: I0126 00:22:40.689261 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-cbk4b"] Jan 26 00:22:40 crc kubenswrapper[5124]: I0126 00:22:40.765601 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2dxn\" (UniqueName: \"kubernetes.io/projected/e2e31e19-e327-45be-a96e-c0164687516e-kube-api-access-v2dxn\") pod \"cert-manager-858d87f86b-cbk4b\" (UID: \"e2e31e19-e327-45be-a96e-c0164687516e\") " pod="cert-manager/cert-manager-858d87f86b-cbk4b" Jan 26 00:22:40 crc kubenswrapper[5124]: I0126 00:22:40.765751 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e2e31e19-e327-45be-a96e-c0164687516e-bound-sa-token\") pod \"cert-manager-858d87f86b-cbk4b\" (UID: \"e2e31e19-e327-45be-a96e-c0164687516e\") " pod="cert-manager/cert-manager-858d87f86b-cbk4b" Jan 26 00:22:40 crc kubenswrapper[5124]: I0126 00:22:40.866497 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e2e31e19-e327-45be-a96e-c0164687516e-bound-sa-token\") pod \"cert-manager-858d87f86b-cbk4b\" (UID: \"e2e31e19-e327-45be-a96e-c0164687516e\") " pod="cert-manager/cert-manager-858d87f86b-cbk4b" Jan 26 00:22:40 crc kubenswrapper[5124]: I0126 00:22:40.866576 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v2dxn\" (UniqueName: \"kubernetes.io/projected/e2e31e19-e327-45be-a96e-c0164687516e-kube-api-access-v2dxn\") pod \"cert-manager-858d87f86b-cbk4b\" (UID: \"e2e31e19-e327-45be-a96e-c0164687516e\") " 
pod="cert-manager/cert-manager-858d87f86b-cbk4b" Jan 26 00:22:40 crc kubenswrapper[5124]: I0126 00:22:40.894422 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2dxn\" (UniqueName: \"kubernetes.io/projected/e2e31e19-e327-45be-a96e-c0164687516e-kube-api-access-v2dxn\") pod \"cert-manager-858d87f86b-cbk4b\" (UID: \"e2e31e19-e327-45be-a96e-c0164687516e\") " pod="cert-manager/cert-manager-858d87f86b-cbk4b" Jan 26 00:22:40 crc kubenswrapper[5124]: I0126 00:22:40.900533 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e2e31e19-e327-45be-a96e-c0164687516e-bound-sa-token\") pod \"cert-manager-858d87f86b-cbk4b\" (UID: \"e2e31e19-e327-45be-a96e-c0164687516e\") " pod="cert-manager/cert-manager-858d87f86b-cbk4b" Jan 26 00:22:41 crc kubenswrapper[5124]: I0126 00:22:41.036922 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-cbk4b" Jan 26 00:22:41 crc kubenswrapper[5124]: I0126 00:22:41.502446 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-cbk4b"] Jan 26 00:22:42 crc kubenswrapper[5124]: I0126 00:22:42.275017 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-9s5jb" Jan 26 00:22:42 crc kubenswrapper[5124]: I0126 00:22:42.331644 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-cbk4b" event={"ID":"e2e31e19-e327-45be-a96e-c0164687516e","Type":"ContainerStarted","Data":"f26be04a729c109210e2801b8b75069293ca52002f49a02e245259aa9c60c997"} Jan 26 00:22:42 crc kubenswrapper[5124]: I0126 00:22:42.331695 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-cbk4b" event={"ID":"e2e31e19-e327-45be-a96e-c0164687516e","Type":"ContainerStarted","Data":"41be0edfa653cb4ef47d88e1542c397cb8316418d3223c49fc36830cc4c7d49b"} Jan 26 00:22:42 crc kubenswrapper[5124]: I0126 00:22:42.356531 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858d87f86b-cbk4b" podStartSLOduration=2.356510604 podStartE2EDuration="2.356510604s" podCreationTimestamp="2026-01-26 00:22:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:22:42.355497817 +0000 UTC m=+840.264417166" watchObservedRunningTime="2026-01-26 00:22:42.356510604 +0000 UTC m=+840.265429953" Jan 26 00:22:43 crc kubenswrapper[5124]: I0126 00:22:43.508151 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:44 crc kubenswrapper[5124]: I0126 00:22:44.346018 5124 generic.go:358] "Generic (PLEG): container finished" podID="cd238caf-5876-429a-9f3a-594804065e20" containerID="119519b53e45716170d5dd5dc54830094fb62b4a51947ab29c09936224221bbe" exitCode=0 Jan 26 00:22:44 crc kubenswrapper[5124]: I0126 00:22:44.346124 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"cd238caf-5876-429a-9f3a-594804065e20","Type":"ContainerDied","Data":"119519b53e45716170d5dd5dc54830094fb62b4a51947ab29c09936224221bbe"} Jan 26 00:22:48 crc kubenswrapper[5124]: I0126 00:22:48.394772 5124 generic.go:358] "Generic (PLEG): container finished" podID="cd238caf-5876-429a-9f3a-594804065e20" 
containerID="0f07511e2e91ede69e43eb23767779ea1475a8664fbca15c7428bb71aecea725" exitCode=0 Jan 26 00:22:48 crc kubenswrapper[5124]: I0126 00:22:48.394910 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"cd238caf-5876-429a-9f3a-594804065e20","Type":"ContainerDied","Data":"0f07511e2e91ede69e43eb23767779ea1475a8664fbca15c7428bb71aecea725"} Jan 26 00:22:48 crc kubenswrapper[5124]: I0126 00:22:48.442006 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_cd238caf-5876-429a-9f3a-594804065e20/manage-dockerfile/0.log" Jan 26 00:22:49 crc kubenswrapper[5124]: I0126 00:22:49.407121 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"cd238caf-5876-429a-9f3a-594804065e20","Type":"ContainerStarted","Data":"f186a0c844259c3f0cf2bf814c73d22746f4c2a36f338127499ec597c9feda3a"} Jan 26 00:22:49 crc kubenswrapper[5124]: I0126 00:22:49.449077 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-2-build" podStartSLOduration=14.449049961 podStartE2EDuration="14.449049961s" podCreationTimestamp="2026-01-26 00:22:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:22:49.442271629 +0000 UTC m=+847.351190978" watchObservedRunningTime="2026-01-26 00:22:49.449049961 +0000 UTC m=+847.357969310" Jan 26 00:22:50 crc kubenswrapper[5124]: I0126 00:22:50.663609 5124 scope.go:117] "RemoveContainer" containerID="438835d18323e2f1e3678c7785469844146c6987d7930abea00ea95eac4ca4d9" Jan 26 00:23:10 crc kubenswrapper[5124]: I0126 00:23:10.829952 5124 patch_prober.go:28] interesting pod/machine-config-daemon-kmxcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:23:10 crc kubenswrapper[5124]: I0126 00:23:10.830548 5124 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" podUID="95fa0656-150a-4d93-a324-77a1306d91f7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:23:40 crc kubenswrapper[5124]: I0126 00:23:40.829881 5124 patch_prober.go:28] interesting pod/machine-config-daemon-kmxcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:23:40 crc kubenswrapper[5124]: I0126 00:23:40.830459 5124 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" podUID="95fa0656-150a-4d93-a324-77a1306d91f7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:23:42 crc kubenswrapper[5124]: I0126 00:23:42.673437 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-smnb7_f826f136-a910-4120-aa62-a08e427590c0/kube-multus/0.log" Jan 26 00:23:42 crc kubenswrapper[5124]: I0126 00:23:42.673669 5124 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-multus_multus-smnb7_f826f136-a910-4120-aa62-a08e427590c0/kube-multus/0.log" Jan 26 00:23:42 crc kubenswrapper[5124]: I0126 00:23:42.680856 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 26 00:23:42 crc kubenswrapper[5124]: I0126 00:23:42.680900 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 26 00:24:00 crc kubenswrapper[5124]: I0126 00:24:00.134075 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29489784-2zkk4"] Jan 26 00:24:00 crc kubenswrapper[5124]: I0126 00:24:00.138994 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489784-2zkk4" Jan 26 00:24:00 crc kubenswrapper[5124]: I0126 00:24:00.141298 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 26 00:24:00 crc kubenswrapper[5124]: I0126 00:24:00.142478 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 26 00:24:00 crc kubenswrapper[5124]: I0126 00:24:00.142652 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-26tfw\"" Jan 26 00:24:00 crc kubenswrapper[5124]: I0126 00:24:00.145729 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489784-2zkk4"] Jan 26 00:24:00 crc kubenswrapper[5124]: I0126 00:24:00.300464 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbkqd\" (UniqueName: \"kubernetes.io/projected/e1b9c748-aa0b-49ff-8f11-47a7a1ca7512-kube-api-access-fbkqd\") pod \"auto-csr-approver-29489784-2zkk4\" (UID: \"e1b9c748-aa0b-49ff-8f11-47a7a1ca7512\") " pod="openshift-infra/auto-csr-approver-29489784-2zkk4" Jan 26 00:24:00 crc kubenswrapper[5124]: I0126 00:24:00.401816 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fbkqd\" (UniqueName: \"kubernetes.io/projected/e1b9c748-aa0b-49ff-8f11-47a7a1ca7512-kube-api-access-fbkqd\") pod \"auto-csr-approver-29489784-2zkk4\" (UID: \"e1b9c748-aa0b-49ff-8f11-47a7a1ca7512\") " pod="openshift-infra/auto-csr-approver-29489784-2zkk4" Jan 26 00:24:00 crc kubenswrapper[5124]: I0126 00:24:00.421982 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbkqd\" (UniqueName: \"kubernetes.io/projected/e1b9c748-aa0b-49ff-8f11-47a7a1ca7512-kube-api-access-fbkqd\") pod \"auto-csr-approver-29489784-2zkk4\" (UID: \"e1b9c748-aa0b-49ff-8f11-47a7a1ca7512\") " pod="openshift-infra/auto-csr-approver-29489784-2zkk4" Jan 26 00:24:00 crc kubenswrapper[5124]: I0126 00:24:00.453774 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489784-2zkk4" Jan 26 00:24:00 crc kubenswrapper[5124]: I0126 00:24:00.697847 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489784-2zkk4"] Jan 26 00:24:01 crc kubenswrapper[5124]: I0126 00:24:01.012209 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489784-2zkk4" event={"ID":"e1b9c748-aa0b-49ff-8f11-47a7a1ca7512","Type":"ContainerStarted","Data":"f077928e27572ce8391bd0b5b8ba866c41f6017862a1e228f306158895aa44ee"} Jan 26 00:24:02 crc kubenswrapper[5124]: I0126 00:24:02.019445 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489784-2zkk4" event={"ID":"e1b9c748-aa0b-49ff-8f11-47a7a1ca7512","Type":"ContainerStarted","Data":"7ee5c262734c1d12b0e010537b9bdf00b0bed56891f103531a465a30793fce02"} Jan 26 00:24:02 crc kubenswrapper[5124]: I0126 00:24:02.041844 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29489784-2zkk4" podStartSLOduration=1.108297622 podStartE2EDuration="2.041824537s" podCreationTimestamp="2026-01-26 00:24:00 +0000 UTC" firstStartedPulling="2026-01-26 00:24:00.713606457 +0000 UTC m=+918.622525806" lastFinishedPulling="2026-01-26 00:24:01.647133372 +0000 UTC m=+919.556052721" observedRunningTime="2026-01-26 00:24:02.04043046 +0000 UTC m=+919.949349819" watchObservedRunningTime="2026-01-26 00:24:02.041824537 +0000 UTC m=+919.950743886" Jan 26 00:24:03 crc kubenswrapper[5124]: I0126 00:24:03.026293 5124 generic.go:358] "Generic (PLEG): container finished" podID="e1b9c748-aa0b-49ff-8f11-47a7a1ca7512" containerID="7ee5c262734c1d12b0e010537b9bdf00b0bed56891f103531a465a30793fce02" exitCode=0 Jan 26 00:24:03 crc kubenswrapper[5124]: I0126 00:24:03.026374 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489784-2zkk4" event={"ID":"e1b9c748-aa0b-49ff-8f11-47a7a1ca7512","Type":"ContainerDied","Data":"7ee5c262734c1d12b0e010537b9bdf00b0bed56891f103531a465a30793fce02"} Jan 26 00:24:04 crc kubenswrapper[5124]: I0126 00:24:04.264254 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489784-2zkk4" Jan 26 00:24:04 crc kubenswrapper[5124]: I0126 00:24:04.358993 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbkqd\" (UniqueName: \"kubernetes.io/projected/e1b9c748-aa0b-49ff-8f11-47a7a1ca7512-kube-api-access-fbkqd\") pod \"e1b9c748-aa0b-49ff-8f11-47a7a1ca7512\" (UID: \"e1b9c748-aa0b-49ff-8f11-47a7a1ca7512\") " Jan 26 00:24:04 crc kubenswrapper[5124]: I0126 00:24:04.367715 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1b9c748-aa0b-49ff-8f11-47a7a1ca7512-kube-api-access-fbkqd" (OuterVolumeSpecName: "kube-api-access-fbkqd") pod "e1b9c748-aa0b-49ff-8f11-47a7a1ca7512" (UID: "e1b9c748-aa0b-49ff-8f11-47a7a1ca7512"). InnerVolumeSpecName "kube-api-access-fbkqd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:24:04 crc kubenswrapper[5124]: I0126 00:24:04.460782 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fbkqd\" (UniqueName: \"kubernetes.io/projected/e1b9c748-aa0b-49ff-8f11-47a7a1ca7512-kube-api-access-fbkqd\") on node \"crc\" DevicePath \"\"" Jan 26 00:24:05 crc kubenswrapper[5124]: I0126 00:24:05.039882 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489784-2zkk4" Jan 26 00:24:05 crc kubenswrapper[5124]: I0126 00:24:05.039885 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489784-2zkk4" event={"ID":"e1b9c748-aa0b-49ff-8f11-47a7a1ca7512","Type":"ContainerDied","Data":"f077928e27572ce8391bd0b5b8ba866c41f6017862a1e228f306158895aa44ee"} Jan 26 00:24:05 crc kubenswrapper[5124]: I0126 00:24:05.040344 5124 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f077928e27572ce8391bd0b5b8ba866c41f6017862a1e228f306158895aa44ee" Jan 26 00:24:05 crc kubenswrapper[5124]: I0126 00:24:05.091477 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29489778-69swk"] Jan 26 00:24:05 crc kubenswrapper[5124]: I0126 00:24:05.097199 5124 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29489778-69swk"] Jan 26 00:24:06 crc kubenswrapper[5124]: I0126 00:24:06.372895 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ce4f34a-592b-4959-a248-ce0c338ddeea" path="/var/lib/kubelet/pods/3ce4f34a-592b-4959-a248-ce0c338ddeea/volumes" Jan 26 00:24:10 crc kubenswrapper[5124]: I0126 00:24:10.079661 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_cd238caf-5876-429a-9f3a-594804065e20/docker-build/0.log" Jan 26 00:24:10 crc kubenswrapper[5124]: I0126 00:24:10.080862 5124 generic.go:358] "Generic (PLEG): container finished" podID="cd238caf-5876-429a-9f3a-594804065e20" containerID="f186a0c844259c3f0cf2bf814c73d22746f4c2a36f338127499ec597c9feda3a" exitCode=1 Jan 26 00:24:10 crc kubenswrapper[5124]: I0126 00:24:10.080955 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"cd238caf-5876-429a-9f3a-594804065e20","Type":"ContainerDied","Data":"f186a0c844259c3f0cf2bf814c73d22746f4c2a36f338127499ec597c9feda3a"} Jan 26 00:24:10 crc kubenswrapper[5124]: I0126 00:24:10.830047 5124 patch_prober.go:28] interesting pod/machine-config-daemon-kmxcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:24:10 crc kubenswrapper[5124]: I0126 00:24:10.830420 5124 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" podUID="95fa0656-150a-4d93-a324-77a1306d91f7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:24:10 crc kubenswrapper[5124]: I0126 00:24:10.830470 5124 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" Jan 26 00:24:10 crc kubenswrapper[5124]: I0126 00:24:10.831218 5124 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"79635baa3ffeb5e4c69b5bd5a6a7d2d5fea58437cda8cef86f8317b3f38ad143"} pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 00:24:10 crc kubenswrapper[5124]: I0126 00:24:10.831284 5124 kuberuntime_container.go:858] "Killing container with a grace 
period" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" podUID="95fa0656-150a-4d93-a324-77a1306d91f7" containerName="machine-config-daemon" containerID="cri-o://79635baa3ffeb5e4c69b5bd5a6a7d2d5fea58437cda8cef86f8317b3f38ad143" gracePeriod=600 Jan 26 00:24:11 crc kubenswrapper[5124]: I0126 00:24:11.089614 5124 generic.go:358] "Generic (PLEG): container finished" podID="95fa0656-150a-4d93-a324-77a1306d91f7" containerID="79635baa3ffeb5e4c69b5bd5a6a7d2d5fea58437cda8cef86f8317b3f38ad143" exitCode=0 Jan 26 00:24:11 crc kubenswrapper[5124]: I0126 00:24:11.089746 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" event={"ID":"95fa0656-150a-4d93-a324-77a1306d91f7","Type":"ContainerDied","Data":"79635baa3ffeb5e4c69b5bd5a6a7d2d5fea58437cda8cef86f8317b3f38ad143"} Jan 26 00:24:11 crc kubenswrapper[5124]: I0126 00:24:11.089797 5124 scope.go:117] "RemoveContainer" containerID="bf0d2bc539a7272b2b55b13ae5225aa87fa06ada3cce31edaeaa612f3511ce10" Jan 26 00:24:11 crc kubenswrapper[5124]: I0126 00:24:11.326125 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_cd238caf-5876-429a-9f3a-594804065e20/docker-build/0.log" Jan 26 00:24:11 crc kubenswrapper[5124]: I0126 00:24:11.326862 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:24:11 crc kubenswrapper[5124]: I0126 00:24:11.454298 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cd238caf-5876-429a-9f3a-594804065e20-build-system-configs\") pod \"cd238caf-5876-429a-9f3a-594804065e20\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " Jan 26 00:24:11 crc kubenswrapper[5124]: I0126 00:24:11.454361 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xd6q\" (UniqueName: \"kubernetes.io/projected/cd238caf-5876-429a-9f3a-594804065e20-kube-api-access-7xd6q\") pod \"cd238caf-5876-429a-9f3a-594804065e20\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " Jan 26 00:24:11 crc kubenswrapper[5124]: I0126 00:24:11.454414 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-cbnx8-pull\" (UniqueName: \"kubernetes.io/secret/cd238caf-5876-429a-9f3a-594804065e20-builder-dockercfg-cbnx8-pull\") pod \"cd238caf-5876-429a-9f3a-594804065e20\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " Jan 26 00:24:11 crc kubenswrapper[5124]: I0126 00:24:11.454447 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cd238caf-5876-429a-9f3a-594804065e20-buildcachedir\") pod \"cd238caf-5876-429a-9f3a-594804065e20\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " Jan 26 00:24:11 crc kubenswrapper[5124]: I0126 00:24:11.454491 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cd238caf-5876-429a-9f3a-594804065e20-build-proxy-ca-bundles\") pod \"cd238caf-5876-429a-9f3a-594804065e20\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " Jan 26 00:24:11 crc kubenswrapper[5124]: I0126 00:24:11.454511 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/cd238caf-5876-429a-9f3a-594804065e20-node-pullsecrets\") pod \"cd238caf-5876-429a-9f3a-594804065e20\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " Jan 26 00:24:11 crc kubenswrapper[5124]: I0126 00:24:11.454574 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cd238caf-5876-429a-9f3a-594804065e20-build-blob-cache\") pod \"cd238caf-5876-429a-9f3a-594804065e20\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " Jan 26 00:24:11 crc kubenswrapper[5124]: I0126 00:24:11.454559 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd238caf-5876-429a-9f3a-594804065e20-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "cd238caf-5876-429a-9f3a-594804065e20" (UID: "cd238caf-5876-429a-9f3a-594804065e20"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:24:11 crc kubenswrapper[5124]: I0126 00:24:11.454655 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd238caf-5876-429a-9f3a-594804065e20-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "cd238caf-5876-429a-9f3a-594804065e20" (UID: "cd238caf-5876-429a-9f3a-594804065e20"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:24:11 crc kubenswrapper[5124]: I0126 00:24:11.454720 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cd238caf-5876-429a-9f3a-594804065e20-container-storage-run\") pod \"cd238caf-5876-429a-9f3a-594804065e20\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " Jan 26 00:24:11 crc kubenswrapper[5124]: I0126 00:24:11.454775 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cd238caf-5876-429a-9f3a-594804065e20-build-ca-bundles\") pod \"cd238caf-5876-429a-9f3a-594804065e20\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " Jan 26 00:24:11 crc kubenswrapper[5124]: I0126 00:24:11.454924 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cd238caf-5876-429a-9f3a-594804065e20-buildworkdir\") pod \"cd238caf-5876-429a-9f3a-594804065e20\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " Jan 26 00:24:11 crc kubenswrapper[5124]: I0126 00:24:11.454989 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-cbnx8-push\" (UniqueName: \"kubernetes.io/secret/cd238caf-5876-429a-9f3a-594804065e20-builder-dockercfg-cbnx8-push\") pod \"cd238caf-5876-429a-9f3a-594804065e20\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " Jan 26 00:24:11 crc kubenswrapper[5124]: I0126 00:24:11.455432 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd238caf-5876-429a-9f3a-594804065e20-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "cd238caf-5876-429a-9f3a-594804065e20" (UID: "cd238caf-5876-429a-9f3a-594804065e20"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:24:11 crc kubenswrapper[5124]: I0126 00:24:11.455714 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd238caf-5876-429a-9f3a-594804065e20-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "cd238caf-5876-429a-9f3a-594804065e20" (UID: "cd238caf-5876-429a-9f3a-594804065e20"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:24:11 crc kubenswrapper[5124]: I0126 00:24:11.455727 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd238caf-5876-429a-9f3a-594804065e20-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "cd238caf-5876-429a-9f3a-594804065e20" (UID: "cd238caf-5876-429a-9f3a-594804065e20"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:24:11 crc kubenswrapper[5124]: I0126 00:24:11.455912 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd238caf-5876-429a-9f3a-594804065e20-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "cd238caf-5876-429a-9f3a-594804065e20" (UID: "cd238caf-5876-429a-9f3a-594804065e20"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:24:11 crc kubenswrapper[5124]: I0126 00:24:11.455984 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cd238caf-5876-429a-9f3a-594804065e20-container-storage-root\") pod \"cd238caf-5876-429a-9f3a-594804065e20\" (UID: \"cd238caf-5876-429a-9f3a-594804065e20\") " Jan 26 00:24:11 crc kubenswrapper[5124]: I0126 00:24:11.459934 5124 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cd238caf-5876-429a-9f3a-594804065e20-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 26 00:24:11 crc kubenswrapper[5124]: I0126 00:24:11.459955 5124 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cd238caf-5876-429a-9f3a-594804065e20-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 26 00:24:11 crc kubenswrapper[5124]: I0126 00:24:11.459964 5124 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cd238caf-5876-429a-9f3a-594804065e20-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:24:11 crc kubenswrapper[5124]: I0126 00:24:11.459971 5124 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cd238caf-5876-429a-9f3a-594804065e20-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 26 00:24:11 crc kubenswrapper[5124]: I0126 00:24:11.459979 5124 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cd238caf-5876-429a-9f3a-594804065e20-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 26 00:24:11 crc kubenswrapper[5124]: I0126 00:24:11.459987 5124 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cd238caf-5876-429a-9f3a-594804065e20-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:24:11 crc kubenswrapper[5124]: I0126 00:24:11.460600 5124 operation_generator.go:781] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd238caf-5876-429a-9f3a-594804065e20-kube-api-access-7xd6q" (OuterVolumeSpecName: "kube-api-access-7xd6q") pod "cd238caf-5876-429a-9f3a-594804065e20" (UID: "cd238caf-5876-429a-9f3a-594804065e20"). InnerVolumeSpecName "kube-api-access-7xd6q". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:24:11 crc kubenswrapper[5124]: I0126 00:24:11.460744 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd238caf-5876-429a-9f3a-594804065e20-builder-dockercfg-cbnx8-pull" (OuterVolumeSpecName: "builder-dockercfg-cbnx8-pull") pod "cd238caf-5876-429a-9f3a-594804065e20" (UID: "cd238caf-5876-429a-9f3a-594804065e20"). InnerVolumeSpecName "builder-dockercfg-cbnx8-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:24:11 crc kubenswrapper[5124]: I0126 00:24:11.469686 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd238caf-5876-429a-9f3a-594804065e20-builder-dockercfg-cbnx8-push" (OuterVolumeSpecName: "builder-dockercfg-cbnx8-push") pod "cd238caf-5876-429a-9f3a-594804065e20" (UID: "cd238caf-5876-429a-9f3a-594804065e20"). InnerVolumeSpecName "builder-dockercfg-cbnx8-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:24:11 crc kubenswrapper[5124]: I0126 00:24:11.487752 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd238caf-5876-429a-9f3a-594804065e20-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "cd238caf-5876-429a-9f3a-594804065e20" (UID: "cd238caf-5876-429a-9f3a-594804065e20"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:24:11 crc kubenswrapper[5124]: I0126 00:24:11.561183 5124 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cd238caf-5876-429a-9f3a-594804065e20-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 26 00:24:11 crc kubenswrapper[5124]: I0126 00:24:11.561219 5124 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-cbnx8-push\" (UniqueName: \"kubernetes.io/secret/cd238caf-5876-429a-9f3a-594804065e20-builder-dockercfg-cbnx8-push\") on node \"crc\" DevicePath \"\"" Jan 26 00:24:11 crc kubenswrapper[5124]: I0126 00:24:11.561235 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7xd6q\" (UniqueName: \"kubernetes.io/projected/cd238caf-5876-429a-9f3a-594804065e20-kube-api-access-7xd6q\") on node \"crc\" DevicePath \"\"" Jan 26 00:24:11 crc kubenswrapper[5124]: I0126 00:24:11.561243 5124 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-cbnx8-pull\" (UniqueName: \"kubernetes.io/secret/cd238caf-5876-429a-9f3a-594804065e20-builder-dockercfg-cbnx8-pull\") on node \"crc\" DevicePath \"\"" Jan 26 00:24:11 crc kubenswrapper[5124]: I0126 00:24:11.654051 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd238caf-5876-429a-9f3a-594804065e20-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "cd238caf-5876-429a-9f3a-594804065e20" (UID: "cd238caf-5876-429a-9f3a-594804065e20"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:24:11 crc kubenswrapper[5124]: I0126 00:24:11.662624 5124 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cd238caf-5876-429a-9f3a-594804065e20-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 26 00:24:12 crc kubenswrapper[5124]: I0126 00:24:12.098054 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_cd238caf-5876-429a-9f3a-594804065e20/docker-build/0.log" Jan 26 00:24:12 crc kubenswrapper[5124]: I0126 00:24:12.099531 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:24:12 crc kubenswrapper[5124]: I0126 00:24:12.099550 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"cd238caf-5876-429a-9f3a-594804065e20","Type":"ContainerDied","Data":"e6957b7310ffabb3ec3b22c53354d48d3a7ce1c6e4a5f607af9c36e75bba13e0"} Jan 26 00:24:12 crc kubenswrapper[5124]: I0126 00:24:12.099604 5124 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6957b7310ffabb3ec3b22c53354d48d3a7ce1c6e4a5f607af9c36e75bba13e0" Jan 26 00:24:12 crc kubenswrapper[5124]: I0126 00:24:12.102701 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" event={"ID":"95fa0656-150a-4d93-a324-77a1306d91f7","Type":"ContainerStarted","Data":"e4364654e7244afc307256d5ab68b10d1fea1b2d37b15d2d92ab4bb0d2fa9068"} Jan 26 00:24:13 crc kubenswrapper[5124]: I0126 00:24:13.251934 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd238caf-5876-429a-9f3a-594804065e20-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "cd238caf-5876-429a-9f3a-594804065e20" (UID: "cd238caf-5876-429a-9f3a-594804065e20"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:24:13 crc kubenswrapper[5124]: I0126 00:24:13.284652 5124 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cd238caf-5876-429a-9f3a-594804065e20-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 26 00:24:21 crc kubenswrapper[5124]: I0126 00:24:21.876349 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Jan 26 00:24:21 crc kubenswrapper[5124]: I0126 00:24:21.877419 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cd238caf-5876-429a-9f3a-594804065e20" containerName="docker-build" Jan 26 00:24:21 crc kubenswrapper[5124]: I0126 00:24:21.877431 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd238caf-5876-429a-9f3a-594804065e20" containerName="docker-build" Jan 26 00:24:21 crc kubenswrapper[5124]: I0126 00:24:21.877451 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cd238caf-5876-429a-9f3a-594804065e20" containerName="git-clone" Jan 26 00:24:21 crc kubenswrapper[5124]: I0126 00:24:21.877456 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd238caf-5876-429a-9f3a-594804065e20" containerName="git-clone" Jan 26 00:24:21 crc kubenswrapper[5124]: I0126 00:24:21.877466 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cd238caf-5876-429a-9f3a-594804065e20" containerName="manage-dockerfile" Jan 26 00:24:21 crc kubenswrapper[5124]: I0126 00:24:21.877472 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd238caf-5876-429a-9f3a-594804065e20" containerName="manage-dockerfile" Jan 26 00:24:21 crc kubenswrapper[5124]: I0126 00:24:21.877484 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e1b9c748-aa0b-49ff-8f11-47a7a1ca7512" containerName="oc" Jan 26 00:24:21 crc kubenswrapper[5124]: I0126 00:24:21.877490 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1b9c748-aa0b-49ff-8f11-47a7a1ca7512" containerName="oc" Jan 26 00:24:21 crc kubenswrapper[5124]: I0126 00:24:21.877600 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="cd238caf-5876-429a-9f3a-594804065e20" containerName="docker-build" Jan 26 00:24:21 crc kubenswrapper[5124]: I0126 00:24:21.877615 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="e1b9c748-aa0b-49ff-8f11-47a7a1ca7512" containerName="oc" Jan 26 00:24:22 crc kubenswrapper[5124]: I0126 00:24:22.054943 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Jan 26 00:24:22 crc kubenswrapper[5124]: I0126 00:24:22.055094 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:24:22 crc kubenswrapper[5124]: I0126 00:24:22.057309 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-3-global-ca\"" Jan 26 00:24:22 crc kubenswrapper[5124]: I0126 00:24:22.057334 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-3-ca\"" Jan 26 00:24:22 crc kubenswrapper[5124]: I0126 00:24:22.057456 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-3-sys-config\"" Jan 26 00:24:22 crc kubenswrapper[5124]: I0126 00:24:22.058643 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-cbnx8\"" Jan 26 00:24:22 crc kubenswrapper[5124]: I0126 00:24:22.108201 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/907f0bcb-9b75-4ab6-b721-88558878d13b-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:24:22 crc kubenswrapper[5124]: I0126 00:24:22.108414 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-cbnx8-pull\" (UniqueName: \"kubernetes.io/secret/907f0bcb-9b75-4ab6-b721-88558878d13b-builder-dockercfg-cbnx8-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:24:22 crc kubenswrapper[5124]: I0126 00:24:22.108473 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/907f0bcb-9b75-4ab6-b721-88558878d13b-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:24:22 crc kubenswrapper[5124]: I0126 00:24:22.108624 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/907f0bcb-9b75-4ab6-b721-88558878d13b-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:24:22 crc kubenswrapper[5124]: I0126 00:24:22.108660 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfc6l\" (UniqueName: \"kubernetes.io/projected/907f0bcb-9b75-4ab6-b721-88558878d13b-kube-api-access-dfc6l\") pod \"service-telemetry-operator-3-build\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:24:22 crc kubenswrapper[5124]: I0126 00:24:22.108719 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/907f0bcb-9b75-4ab6-b721-88558878d13b-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:24:22 
crc kubenswrapper[5124]: I0126 00:24:22.108779 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/907f0bcb-9b75-4ab6-b721-88558878d13b-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:24:22 crc kubenswrapper[5124]: I0126 00:24:22.108803 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/907f0bcb-9b75-4ab6-b721-88558878d13b-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:24:22 crc kubenswrapper[5124]: I0126 00:24:22.108822 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/907f0bcb-9b75-4ab6-b721-88558878d13b-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:24:22 crc kubenswrapper[5124]: I0126 00:24:22.108861 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/907f0bcb-9b75-4ab6-b721-88558878d13b-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:24:22 crc kubenswrapper[5124]: I0126 00:24:22.108878 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-cbnx8-push\" (UniqueName: \"kubernetes.io/secret/907f0bcb-9b75-4ab6-b721-88558878d13b-builder-dockercfg-cbnx8-push\") pod \"service-telemetry-operator-3-build\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:24:22 crc kubenswrapper[5124]: I0126 00:24:22.108906 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/907f0bcb-9b75-4ab6-b721-88558878d13b-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:24:22 crc kubenswrapper[5124]: I0126 00:24:22.210511 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/907f0bcb-9b75-4ab6-b721-88558878d13b-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:24:22 crc kubenswrapper[5124]: I0126 00:24:22.210610 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-cbnx8-pull\" (UniqueName: \"kubernetes.io/secret/907f0bcb-9b75-4ab6-b721-88558878d13b-builder-dockercfg-cbnx8-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:24:22 crc kubenswrapper[5124]: I0126 00:24:22.210635 5124 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/907f0bcb-9b75-4ab6-b721-88558878d13b-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:24:22 crc kubenswrapper[5124]: I0126 00:24:22.210670 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/907f0bcb-9b75-4ab6-b721-88558878d13b-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:24:22 crc kubenswrapper[5124]: I0126 00:24:22.210691 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dfc6l\" (UniqueName: \"kubernetes.io/projected/907f0bcb-9b75-4ab6-b721-88558878d13b-kube-api-access-dfc6l\") pod \"service-telemetry-operator-3-build\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:24:22 crc kubenswrapper[5124]: I0126 00:24:22.210717 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/907f0bcb-9b75-4ab6-b721-88558878d13b-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:24:22 crc kubenswrapper[5124]: I0126 00:24:22.210746 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/907f0bcb-9b75-4ab6-b721-88558878d13b-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:24:22 crc kubenswrapper[5124]: I0126 00:24:22.210771 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/907f0bcb-9b75-4ab6-b721-88558878d13b-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:24:22 crc kubenswrapper[5124]: I0126 00:24:22.210792 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/907f0bcb-9b75-4ab6-b721-88558878d13b-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:24:22 crc kubenswrapper[5124]: I0126 00:24:22.210842 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/907f0bcb-9b75-4ab6-b721-88558878d13b-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:24:22 crc kubenswrapper[5124]: I0126 00:24:22.210871 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-cbnx8-push\" (UniqueName: \"kubernetes.io/secret/907f0bcb-9b75-4ab6-b721-88558878d13b-builder-dockercfg-cbnx8-push\") pod 
\"service-telemetry-operator-3-build\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:24:22 crc kubenswrapper[5124]: I0126 00:24:22.210906 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/907f0bcb-9b75-4ab6-b721-88558878d13b-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:24:22 crc kubenswrapper[5124]: I0126 00:24:22.211036 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/907f0bcb-9b75-4ab6-b721-88558878d13b-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:24:22 crc kubenswrapper[5124]: I0126 00:24:22.211166 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/907f0bcb-9b75-4ab6-b721-88558878d13b-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:24:22 crc kubenswrapper[5124]: I0126 00:24:22.211162 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/907f0bcb-9b75-4ab6-b721-88558878d13b-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:24:22 crc kubenswrapper[5124]: I0126 00:24:22.211193 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/907f0bcb-9b75-4ab6-b721-88558878d13b-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:24:22 crc kubenswrapper[5124]: I0126 00:24:22.211254 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/907f0bcb-9b75-4ab6-b721-88558878d13b-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:24:22 crc kubenswrapper[5124]: I0126 00:24:22.211480 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/907f0bcb-9b75-4ab6-b721-88558878d13b-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:24:22 crc kubenswrapper[5124]: I0126 00:24:22.211923 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/907f0bcb-9b75-4ab6-b721-88558878d13b-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:24:22 crc kubenswrapper[5124]: I0126 00:24:22.212333 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/907f0bcb-9b75-4ab6-b721-88558878d13b-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:24:22 crc kubenswrapper[5124]: I0126 00:24:22.212690 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/907f0bcb-9b75-4ab6-b721-88558878d13b-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:24:22 crc kubenswrapper[5124]: I0126 00:24:22.216726 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-cbnx8-pull\" (UniqueName: \"kubernetes.io/secret/907f0bcb-9b75-4ab6-b721-88558878d13b-builder-dockercfg-cbnx8-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:24:22 crc kubenswrapper[5124]: I0126 00:24:22.217978 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-cbnx8-push\" (UniqueName: \"kubernetes.io/secret/907f0bcb-9b75-4ab6-b721-88558878d13b-builder-dockercfg-cbnx8-push\") pod \"service-telemetry-operator-3-build\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:24:22 crc kubenswrapper[5124]: I0126 00:24:22.234115 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfc6l\" (UniqueName: \"kubernetes.io/projected/907f0bcb-9b75-4ab6-b721-88558878d13b-kube-api-access-dfc6l\") pod \"service-telemetry-operator-3-build\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:24:22 crc kubenswrapper[5124]: I0126 00:24:22.369149 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:24:22 crc kubenswrapper[5124]: I0126 00:24:22.668257 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Jan 26 00:24:23 crc kubenswrapper[5124]: I0126 00:24:23.179255 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"907f0bcb-9b75-4ab6-b721-88558878d13b","Type":"ContainerStarted","Data":"1d8a43ff30764487c74abf4ccf0c352ad8c54a8048b6b8d1fd5af248ae7b249c"} Jan 26 00:24:23 crc kubenswrapper[5124]: I0126 00:24:23.179301 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"907f0bcb-9b75-4ab6-b721-88558878d13b","Type":"ContainerStarted","Data":"07988d15347047b5612b632820fbad4abfb894efbd45686e6d08f43696a914f6"} Jan 26 00:24:31 crc kubenswrapper[5124]: I0126 00:24:31.236686 5124 generic.go:358] "Generic (PLEG): container finished" podID="907f0bcb-9b75-4ab6-b721-88558878d13b" containerID="1d8a43ff30764487c74abf4ccf0c352ad8c54a8048b6b8d1fd5af248ae7b249c" exitCode=0 Jan 26 00:24:31 crc kubenswrapper[5124]: I0126 00:24:31.236849 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"907f0bcb-9b75-4ab6-b721-88558878d13b","Type":"ContainerDied","Data":"1d8a43ff30764487c74abf4ccf0c352ad8c54a8048b6b8d1fd5af248ae7b249c"} Jan 26 00:24:32 crc kubenswrapper[5124]: I0126 00:24:32.250664 5124 generic.go:358] "Generic (PLEG): container finished" podID="907f0bcb-9b75-4ab6-b721-88558878d13b" containerID="fc0372277127e396b730e51ec370e0e9150fbd23a4d4aeca142a4a254a2e6e12" exitCode=0 Jan 26 00:24:32 crc kubenswrapper[5124]: I0126 00:24:32.250773 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"907f0bcb-9b75-4ab6-b721-88558878d13b","Type":"ContainerDied","Data":"fc0372277127e396b730e51ec370e0e9150fbd23a4d4aeca142a4a254a2e6e12"} Jan 26 00:24:32 crc kubenswrapper[5124]: I0126 00:24:32.313648 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_907f0bcb-9b75-4ab6-b721-88558878d13b/manage-dockerfile/0.log" Jan 26 00:24:33 crc kubenswrapper[5124]: I0126 00:24:33.260032 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"907f0bcb-9b75-4ab6-b721-88558878d13b","Type":"ContainerStarted","Data":"7e0d6ae6ccf88bdd5ab146811f43ec30e6802e3a78b4de9baa937317b66e0c9c"} Jan 26 00:24:33 crc kubenswrapper[5124]: I0126 00:24:33.299847 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-3-build" podStartSLOduration=12.299822017 podStartE2EDuration="12.299822017s" podCreationTimestamp="2026-01-26 00:24:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:24:33.295361058 +0000 UTC m=+951.204280427" watchObservedRunningTime="2026-01-26 00:24:33.299822017 +0000 UTC m=+951.208741366" Jan 26 00:24:52 crc kubenswrapper[5124]: I0126 00:24:52.079019 5124 scope.go:117] "RemoveContainer" containerID="a18a4115f1d6f85f746ece3d78249c6901eaec4a0eadf93b91e59234138ac17a" Jan 26 00:25:42 crc kubenswrapper[5124]: I0126 00:25:42.730489 5124 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_907f0bcb-9b75-4ab6-b721-88558878d13b/docker-build/0.log" Jan 26 00:25:42 crc kubenswrapper[5124]: I0126 00:25:42.732362 5124 generic.go:358] "Generic (PLEG): container finished" podID="907f0bcb-9b75-4ab6-b721-88558878d13b" containerID="7e0d6ae6ccf88bdd5ab146811f43ec30e6802e3a78b4de9baa937317b66e0c9c" exitCode=1 Jan 26 00:25:42 crc kubenswrapper[5124]: I0126 00:25:42.732466 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"907f0bcb-9b75-4ab6-b721-88558878d13b","Type":"ContainerDied","Data":"7e0d6ae6ccf88bdd5ab146811f43ec30e6802e3a78b4de9baa937317b66e0c9c"} Jan 26 00:25:44 crc kubenswrapper[5124]: I0126 00:25:44.018106 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_907f0bcb-9b75-4ab6-b721-88558878d13b/docker-build/0.log" Jan 26 00:25:44 crc kubenswrapper[5124]: I0126 00:25:44.019325 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:44 crc kubenswrapper[5124]: I0126 00:25:44.131084 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/907f0bcb-9b75-4ab6-b721-88558878d13b-build-system-configs\") pod \"907f0bcb-9b75-4ab6-b721-88558878d13b\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " Jan 26 00:25:44 crc kubenswrapper[5124]: I0126 00:25:44.131180 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/907f0bcb-9b75-4ab6-b721-88558878d13b-buildworkdir\") pod \"907f0bcb-9b75-4ab6-b721-88558878d13b\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " Jan 26 00:25:44 crc kubenswrapper[5124]: I0126 00:25:44.131209 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/907f0bcb-9b75-4ab6-b721-88558878d13b-container-storage-run\") pod \"907f0bcb-9b75-4ab6-b721-88558878d13b\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " Jan 26 00:25:44 crc kubenswrapper[5124]: I0126 00:25:44.131239 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-cbnx8-push\" (UniqueName: \"kubernetes.io/secret/907f0bcb-9b75-4ab6-b721-88558878d13b-builder-dockercfg-cbnx8-push\") pod \"907f0bcb-9b75-4ab6-b721-88558878d13b\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " Jan 26 00:25:44 crc kubenswrapper[5124]: I0126 00:25:44.131289 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/907f0bcb-9b75-4ab6-b721-88558878d13b-node-pullsecrets\") pod \"907f0bcb-9b75-4ab6-b721-88558878d13b\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " Jan 26 00:25:44 crc kubenswrapper[5124]: I0126 00:25:44.131340 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/907f0bcb-9b75-4ab6-b721-88558878d13b-container-storage-root\") pod \"907f0bcb-9b75-4ab6-b721-88558878d13b\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " Jan 26 00:25:44 crc kubenswrapper[5124]: I0126 00:25:44.131363 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/907f0bcb-9b75-4ab6-b721-88558878d13b-build-ca-bundles\") pod \"907f0bcb-9b75-4ab6-b721-88558878d13b\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " Jan 26 00:25:44 crc kubenswrapper[5124]: I0126 00:25:44.131415 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/907f0bcb-9b75-4ab6-b721-88558878d13b-buildcachedir\") pod \"907f0bcb-9b75-4ab6-b721-88558878d13b\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " Jan 26 00:25:44 crc kubenswrapper[5124]: I0126 00:25:44.131455 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-cbnx8-pull\" (UniqueName: \"kubernetes.io/secret/907f0bcb-9b75-4ab6-b721-88558878d13b-builder-dockercfg-cbnx8-pull\") pod \"907f0bcb-9b75-4ab6-b721-88558878d13b\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " Jan 26 00:25:44 crc kubenswrapper[5124]: I0126 00:25:44.131484 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfc6l\" (UniqueName: \"kubernetes.io/projected/907f0bcb-9b75-4ab6-b721-88558878d13b-kube-api-access-dfc6l\") pod \"907f0bcb-9b75-4ab6-b721-88558878d13b\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " Jan 26 00:25:44 crc kubenswrapper[5124]: I0126 00:25:44.131507 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/907f0bcb-9b75-4ab6-b721-88558878d13b-build-blob-cache\") pod \"907f0bcb-9b75-4ab6-b721-88558878d13b\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " Jan 26 00:25:44 crc kubenswrapper[5124]: I0126 00:25:44.131537 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/907f0bcb-9b75-4ab6-b721-88558878d13b-build-proxy-ca-bundles\") pod \"907f0bcb-9b75-4ab6-b721-88558878d13b\" (UID: \"907f0bcb-9b75-4ab6-b721-88558878d13b\") " Jan 26 00:25:44 crc kubenswrapper[5124]: I0126 00:25:44.132017 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/907f0bcb-9b75-4ab6-b721-88558878d13b-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "907f0bcb-9b75-4ab6-b721-88558878d13b" (UID: "907f0bcb-9b75-4ab6-b721-88558878d13b"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:25:44 crc kubenswrapper[5124]: I0126 00:25:44.132464 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/907f0bcb-9b75-4ab6-b721-88558878d13b-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "907f0bcb-9b75-4ab6-b721-88558878d13b" (UID: "907f0bcb-9b75-4ab6-b721-88558878d13b"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:25:44 crc kubenswrapper[5124]: I0126 00:25:44.133063 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/907f0bcb-9b75-4ab6-b721-88558878d13b-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "907f0bcb-9b75-4ab6-b721-88558878d13b" (UID: "907f0bcb-9b75-4ab6-b721-88558878d13b"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:25:44 crc kubenswrapper[5124]: I0126 00:25:44.133174 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/907f0bcb-9b75-4ab6-b721-88558878d13b-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "907f0bcb-9b75-4ab6-b721-88558878d13b" (UID: "907f0bcb-9b75-4ab6-b721-88558878d13b"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:25:44 crc kubenswrapper[5124]: I0126 00:25:44.133326 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/907f0bcb-9b75-4ab6-b721-88558878d13b-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "907f0bcb-9b75-4ab6-b721-88558878d13b" (UID: "907f0bcb-9b75-4ab6-b721-88558878d13b"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:25:44 crc kubenswrapper[5124]: I0126 00:25:44.134208 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/907f0bcb-9b75-4ab6-b721-88558878d13b-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "907f0bcb-9b75-4ab6-b721-88558878d13b" (UID: "907f0bcb-9b75-4ab6-b721-88558878d13b"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:25:44 crc kubenswrapper[5124]: I0126 00:25:44.137762 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/907f0bcb-9b75-4ab6-b721-88558878d13b-builder-dockercfg-cbnx8-pull" (OuterVolumeSpecName: "builder-dockercfg-cbnx8-pull") pod "907f0bcb-9b75-4ab6-b721-88558878d13b" (UID: "907f0bcb-9b75-4ab6-b721-88558878d13b"). InnerVolumeSpecName "builder-dockercfg-cbnx8-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:25:44 crc kubenswrapper[5124]: I0126 00:25:44.137771 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/907f0bcb-9b75-4ab6-b721-88558878d13b-kube-api-access-dfc6l" (OuterVolumeSpecName: "kube-api-access-dfc6l") pod "907f0bcb-9b75-4ab6-b721-88558878d13b" (UID: "907f0bcb-9b75-4ab6-b721-88558878d13b"). InnerVolumeSpecName "kube-api-access-dfc6l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:25:44 crc kubenswrapper[5124]: I0126 00:25:44.137777 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/907f0bcb-9b75-4ab6-b721-88558878d13b-builder-dockercfg-cbnx8-push" (OuterVolumeSpecName: "builder-dockercfg-cbnx8-push") pod "907f0bcb-9b75-4ab6-b721-88558878d13b" (UID: "907f0bcb-9b75-4ab6-b721-88558878d13b"). InnerVolumeSpecName "builder-dockercfg-cbnx8-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:25:44 crc kubenswrapper[5124]: I0126 00:25:44.166400 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/907f0bcb-9b75-4ab6-b721-88558878d13b-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "907f0bcb-9b75-4ab6-b721-88558878d13b" (UID: "907f0bcb-9b75-4ab6-b721-88558878d13b"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:25:44 crc kubenswrapper[5124]: I0126 00:25:44.233254 5124 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/907f0bcb-9b75-4ab6-b721-88558878d13b-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:44 crc kubenswrapper[5124]: I0126 00:25:44.233304 5124 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/907f0bcb-9b75-4ab6-b721-88558878d13b-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:44 crc kubenswrapper[5124]: I0126 00:25:44.233319 5124 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-cbnx8-push\" (UniqueName: \"kubernetes.io/secret/907f0bcb-9b75-4ab6-b721-88558878d13b-builder-dockercfg-cbnx8-push\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:44 crc kubenswrapper[5124]: I0126 00:25:44.233333 5124 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/907f0bcb-9b75-4ab6-b721-88558878d13b-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:44 crc kubenswrapper[5124]: I0126 00:25:44.233348 5124 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/907f0bcb-9b75-4ab6-b721-88558878d13b-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:44 crc kubenswrapper[5124]: I0126 00:25:44.233360 5124 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/907f0bcb-9b75-4ab6-b721-88558878d13b-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:44 crc kubenswrapper[5124]: I0126 00:25:44.233372 5124 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-cbnx8-pull\" (UniqueName: \"kubernetes.io/secret/907f0bcb-9b75-4ab6-b721-88558878d13b-builder-dockercfg-cbnx8-pull\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:44 crc kubenswrapper[5124]: I0126 00:25:44.233384 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dfc6l\" (UniqueName: \"kubernetes.io/projected/907f0bcb-9b75-4ab6-b721-88558878d13b-kube-api-access-dfc6l\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:44 crc kubenswrapper[5124]: I0126 00:25:44.233395 5124 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/907f0bcb-9b75-4ab6-b721-88558878d13b-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:44 crc kubenswrapper[5124]: I0126 00:25:44.233407 5124 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/907f0bcb-9b75-4ab6-b721-88558878d13b-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:44 crc kubenswrapper[5124]: I0126 00:25:44.320901 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/907f0bcb-9b75-4ab6-b721-88558878d13b-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "907f0bcb-9b75-4ab6-b721-88558878d13b" (UID: "907f0bcb-9b75-4ab6-b721-88558878d13b"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:25:44 crc kubenswrapper[5124]: I0126 00:25:44.334757 5124 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/907f0bcb-9b75-4ab6-b721-88558878d13b-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:44 crc kubenswrapper[5124]: I0126 00:25:44.745849 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_907f0bcb-9b75-4ab6-b721-88558878d13b/docker-build/0.log" Jan 26 00:25:44 crc kubenswrapper[5124]: I0126 00:25:44.746876 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"907f0bcb-9b75-4ab6-b721-88558878d13b","Type":"ContainerDied","Data":"07988d15347047b5612b632820fbad4abfb894efbd45686e6d08f43696a914f6"} Jan 26 00:25:44 crc kubenswrapper[5124]: I0126 00:25:44.746895 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:44 crc kubenswrapper[5124]: I0126 00:25:44.746916 5124 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07988d15347047b5612b632820fbad4abfb894efbd45686e6d08f43696a914f6" Jan 26 00:25:46 crc kubenswrapper[5124]: I0126 00:25:46.188900 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/907f0bcb-9b75-4ab6-b721-88558878d13b-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "907f0bcb-9b75-4ab6-b721-88558878d13b" (UID: "907f0bcb-9b75-4ab6-b721-88558878d13b"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:25:46 crc kubenswrapper[5124]: I0126 00:25:46.260220 5124 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/907f0bcb-9b75-4ab6-b721-88558878d13b-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.690063 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.692897 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="907f0bcb-9b75-4ab6-b721-88558878d13b" containerName="git-clone" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.693025 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="907f0bcb-9b75-4ab6-b721-88558878d13b" containerName="git-clone" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.693131 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="907f0bcb-9b75-4ab6-b721-88558878d13b" containerName="docker-build" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.693212 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="907f0bcb-9b75-4ab6-b721-88558878d13b" containerName="docker-build" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.693294 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="907f0bcb-9b75-4ab6-b721-88558878d13b" containerName="manage-dockerfile" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.693375 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="907f0bcb-9b75-4ab6-b721-88558878d13b" containerName="manage-dockerfile" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.693572 5124 memory_manager.go:356] 
"RemoveStaleState removing state" podUID="907f0bcb-9b75-4ab6-b721-88558878d13b" containerName="docker-build" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.706697 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.709740 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-4-global-ca\"" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.709970 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-4-sys-config\"" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.710756 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-cbnx8\"" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.713562 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-4-ca\"" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.715682 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.777372 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.777423 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.777456 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.777512 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.777623 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-cbnx8-pull\" (UniqueName: \"kubernetes.io/secret/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-builder-dockercfg-cbnx8-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 
00:25:54.777702 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.777736 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.777805 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.777834 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-cbnx8-push\" (UniqueName: \"kubernetes.io/secret/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-builder-dockercfg-cbnx8-push\") pod \"service-telemetry-operator-4-build\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.777882 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.777961 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.777990 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hmzf\" (UniqueName: \"kubernetes.io/projected/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-kube-api-access-7hmzf\") pod \"service-telemetry-operator-4-build\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.879223 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.879264 5124 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.879287 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.879304 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.879340 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-cbnx8-pull\" (UniqueName: \"kubernetes.io/secret/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-builder-dockercfg-cbnx8-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.879378 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.879661 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.879754 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.879855 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.879867 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: 
\"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.879868 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.879954 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-cbnx8-push\" (UniqueName: \"kubernetes.io/secret/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-builder-dockercfg-cbnx8-push\") pod \"service-telemetry-operator-4-build\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.879977 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.879995 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.880039 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.880060 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7hmzf\" (UniqueName: \"kubernetes.io/projected/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-kube-api-access-7hmzf\") pod \"service-telemetry-operator-4-build\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.880152 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.880316 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.880385 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.880390 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.880988 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.890762 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-cbnx8-push\" (UniqueName: \"kubernetes.io/secret/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-builder-dockercfg-cbnx8-push\") pod \"service-telemetry-operator-4-build\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.891656 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-cbnx8-pull\" (UniqueName: \"kubernetes.io/secret/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-builder-dockercfg-cbnx8-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:25:54 crc kubenswrapper[5124]: I0126 00:25:54.920511 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hmzf\" (UniqueName: \"kubernetes.io/projected/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-kube-api-access-7hmzf\") pod \"service-telemetry-operator-4-build\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:25:55 crc kubenswrapper[5124]: I0126 00:25:55.026036 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:25:55 crc kubenswrapper[5124]: I0126 00:25:55.254679 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Jan 26 00:25:55 crc kubenswrapper[5124]: I0126 00:25:55.260620 5124 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 00:25:55 crc kubenswrapper[5124]: I0126 00:25:55.827559 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"b7aad826-6989-4e26-bc04-f2d00bd4b0fa","Type":"ContainerStarted","Data":"b5f2d521dda4175545078728213beb87886c3869f55d9b6e2274d1f27a0e87c9"} Jan 26 00:25:55 crc kubenswrapper[5124]: I0126 00:25:55.827649 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"b7aad826-6989-4e26-bc04-f2d00bd4b0fa","Type":"ContainerStarted","Data":"5191b5e5912c63c040bce1e5f8fe14f5d8e9e2f0a859e9fc17a80625e047e7e3"} Jan 26 00:26:00 crc kubenswrapper[5124]: I0126 00:26:00.135829 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29489786-46cnn"] Jan 26 00:26:00 crc kubenswrapper[5124]: I0126 00:26:00.156806 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489786-46cnn"] Jan 26 00:26:00 crc kubenswrapper[5124]: I0126 00:26:00.156944 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489786-46cnn" Jan 26 00:26:00 crc kubenswrapper[5124]: I0126 00:26:00.159696 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 26 00:26:00 crc kubenswrapper[5124]: I0126 00:26:00.160053 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 26 00:26:00 crc kubenswrapper[5124]: I0126 00:26:00.160214 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-26tfw\"" Jan 26 00:26:00 crc kubenswrapper[5124]: I0126 00:26:00.252770 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-865hn\" (UniqueName: \"kubernetes.io/projected/bb0161f4-0739-4dad-b0fb-cb065fec2d03-kube-api-access-865hn\") pod \"auto-csr-approver-29489786-46cnn\" (UID: \"bb0161f4-0739-4dad-b0fb-cb065fec2d03\") " pod="openshift-infra/auto-csr-approver-29489786-46cnn" Jan 26 00:26:00 crc kubenswrapper[5124]: I0126 00:26:00.354175 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-865hn\" (UniqueName: \"kubernetes.io/projected/bb0161f4-0739-4dad-b0fb-cb065fec2d03-kube-api-access-865hn\") pod \"auto-csr-approver-29489786-46cnn\" (UID: \"bb0161f4-0739-4dad-b0fb-cb065fec2d03\") " pod="openshift-infra/auto-csr-approver-29489786-46cnn" Jan 26 00:26:00 crc kubenswrapper[5124]: I0126 00:26:00.379157 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-865hn\" (UniqueName: \"kubernetes.io/projected/bb0161f4-0739-4dad-b0fb-cb065fec2d03-kube-api-access-865hn\") pod \"auto-csr-approver-29489786-46cnn\" (UID: \"bb0161f4-0739-4dad-b0fb-cb065fec2d03\") " pod="openshift-infra/auto-csr-approver-29489786-46cnn" Jan 26 00:26:00 crc kubenswrapper[5124]: I0126 00:26:00.475299 5124 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489786-46cnn" Jan 26 00:26:00 crc kubenswrapper[5124]: I0126 00:26:00.692603 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489786-46cnn"] Jan 26 00:26:00 crc kubenswrapper[5124]: I0126 00:26:00.866865 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489786-46cnn" event={"ID":"bb0161f4-0739-4dad-b0fb-cb065fec2d03","Type":"ContainerStarted","Data":"023da41cff60e22e22b74e35e1e4ed74695783e271dd3650b4ed739cca576f99"} Jan 26 00:26:02 crc kubenswrapper[5124]: I0126 00:26:02.902003 5124 generic.go:358] "Generic (PLEG): container finished" podID="bb0161f4-0739-4dad-b0fb-cb065fec2d03" containerID="0df3319170851245e973cf4100474630f30023e31dfdd3766cd0e08dedb142e2" exitCode=0 Jan 26 00:26:02 crc kubenswrapper[5124]: I0126 00:26:02.902062 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489786-46cnn" event={"ID":"bb0161f4-0739-4dad-b0fb-cb065fec2d03","Type":"ContainerDied","Data":"0df3319170851245e973cf4100474630f30023e31dfdd3766cd0e08dedb142e2"} Jan 26 00:26:02 crc kubenswrapper[5124]: I0126 00:26:02.903866 5124 generic.go:358] "Generic (PLEG): container finished" podID="b7aad826-6989-4e26-bc04-f2d00bd4b0fa" containerID="b5f2d521dda4175545078728213beb87886c3869f55d9b6e2274d1f27a0e87c9" exitCode=0 Jan 26 00:26:02 crc kubenswrapper[5124]: I0126 00:26:02.903945 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"b7aad826-6989-4e26-bc04-f2d00bd4b0fa","Type":"ContainerDied","Data":"b5f2d521dda4175545078728213beb87886c3869f55d9b6e2274d1f27a0e87c9"} Jan 26 00:26:03 crc kubenswrapper[5124]: I0126 00:26:03.911834 5124 generic.go:358] "Generic (PLEG): container finished" podID="b7aad826-6989-4e26-bc04-f2d00bd4b0fa" containerID="3788a1822bbf32c1431068f315ebd1f1f0e206acbb6e70fd32a8929f6dd997ae" exitCode=0 Jan 26 00:26:03 crc kubenswrapper[5124]: I0126 00:26:03.911922 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"b7aad826-6989-4e26-bc04-f2d00bd4b0fa","Type":"ContainerDied","Data":"3788a1822bbf32c1431068f315ebd1f1f0e206acbb6e70fd32a8929f6dd997ae"} Jan 26 00:26:03 crc kubenswrapper[5124]: I0126 00:26:03.952734 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_b7aad826-6989-4e26-bc04-f2d00bd4b0fa/manage-dockerfile/0.log" Jan 26 00:26:04 crc kubenswrapper[5124]: I0126 00:26:04.217981 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489786-46cnn" Jan 26 00:26:04 crc kubenswrapper[5124]: I0126 00:26:04.309661 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-865hn\" (UniqueName: \"kubernetes.io/projected/bb0161f4-0739-4dad-b0fb-cb065fec2d03-kube-api-access-865hn\") pod \"bb0161f4-0739-4dad-b0fb-cb065fec2d03\" (UID: \"bb0161f4-0739-4dad-b0fb-cb065fec2d03\") " Jan 26 00:26:04 crc kubenswrapper[5124]: I0126 00:26:04.316751 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb0161f4-0739-4dad-b0fb-cb065fec2d03-kube-api-access-865hn" (OuterVolumeSpecName: "kube-api-access-865hn") pod "bb0161f4-0739-4dad-b0fb-cb065fec2d03" (UID: "bb0161f4-0739-4dad-b0fb-cb065fec2d03"). 
InnerVolumeSpecName "kube-api-access-865hn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:26:04 crc kubenswrapper[5124]: I0126 00:26:04.411809 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-865hn\" (UniqueName: \"kubernetes.io/projected/bb0161f4-0739-4dad-b0fb-cb065fec2d03-kube-api-access-865hn\") on node \"crc\" DevicePath \"\"" Jan 26 00:26:04 crc kubenswrapper[5124]: I0126 00:26:04.920214 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489786-46cnn" Jan 26 00:26:04 crc kubenswrapper[5124]: I0126 00:26:04.920240 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489786-46cnn" event={"ID":"bb0161f4-0739-4dad-b0fb-cb065fec2d03","Type":"ContainerDied","Data":"023da41cff60e22e22b74e35e1e4ed74695783e271dd3650b4ed739cca576f99"} Jan 26 00:26:04 crc kubenswrapper[5124]: I0126 00:26:04.920278 5124 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="023da41cff60e22e22b74e35e1e4ed74695783e271dd3650b4ed739cca576f99" Jan 26 00:26:04 crc kubenswrapper[5124]: I0126 00:26:04.923360 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"b7aad826-6989-4e26-bc04-f2d00bd4b0fa","Type":"ContainerStarted","Data":"a2a60d24b31e99b1436e962b89edc25a9bf2f3d8d7d5dc6b27f89cd35364de44"} Jan 26 00:26:04 crc kubenswrapper[5124]: I0126 00:26:04.953426 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-4-build" podStartSLOduration=10.953409466 podStartE2EDuration="10.953409466s" podCreationTimestamp="2026-01-26 00:25:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:26:04.948495213 +0000 UTC m=+1042.857414602" watchObservedRunningTime="2026-01-26 00:26:04.953409466 +0000 UTC m=+1042.862328815" Jan 26 00:26:05 crc kubenswrapper[5124]: I0126 00:26:05.272231 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29489780-qvnlc"] Jan 26 00:26:05 crc kubenswrapper[5124]: I0126 00:26:05.279030 5124 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29489780-qvnlc"] Jan 26 00:26:06 crc kubenswrapper[5124]: I0126 00:26:06.372157 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7cbfe39-767f-4343-96cd-cda76678d60c" path="/var/lib/kubelet/pods/c7cbfe39-767f-4343-96cd-cda76678d60c/volumes" Jan 26 00:26:40 crc kubenswrapper[5124]: I0126 00:26:40.830989 5124 patch_prober.go:28] interesting pod/machine-config-daemon-kmxcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:26:40 crc kubenswrapper[5124]: I0126 00:26:40.831814 5124 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" podUID="95fa0656-150a-4d93-a324-77a1306d91f7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:26:52 crc kubenswrapper[5124]: I0126 00:26:52.191995 5124 scope.go:117] "RemoveContainer" 
containerID="a6cc4c7c30d62521c22daa2e1c43e9bab237c5a29aa4c1e42b8e975ba4af144b" Jan 26 00:27:10 crc kubenswrapper[5124]: I0126 00:27:10.830063 5124 patch_prober.go:28] interesting pod/machine-config-daemon-kmxcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:27:10 crc kubenswrapper[5124]: I0126 00:27:10.830742 5124 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" podUID="95fa0656-150a-4d93-a324-77a1306d91f7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:27:14 crc kubenswrapper[5124]: I0126 00:27:14.762634 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_b7aad826-6989-4e26-bc04-f2d00bd4b0fa/docker-build/0.log" Jan 26 00:27:14 crc kubenswrapper[5124]: I0126 00:27:14.763637 5124 generic.go:358] "Generic (PLEG): container finished" podID="b7aad826-6989-4e26-bc04-f2d00bd4b0fa" containerID="a2a60d24b31e99b1436e962b89edc25a9bf2f3d8d7d5dc6b27f89cd35364de44" exitCode=1 Jan 26 00:27:14 crc kubenswrapper[5124]: I0126 00:27:14.763694 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"b7aad826-6989-4e26-bc04-f2d00bd4b0fa","Type":"ContainerDied","Data":"a2a60d24b31e99b1436e962b89edc25a9bf2f3d8d7d5dc6b27f89cd35364de44"} Jan 26 00:27:16 crc kubenswrapper[5124]: I0126 00:27:16.070229 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_b7aad826-6989-4e26-bc04-f2d00bd4b0fa/docker-build/0.log" Jan 26 00:27:16 crc kubenswrapper[5124]: I0126 00:27:16.071373 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:16 crc kubenswrapper[5124]: I0126 00:27:16.154037 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-build-proxy-ca-bundles\") pod \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " Jan 26 00:27:16 crc kubenswrapper[5124]: I0126 00:27:16.154068 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-build-system-configs\") pod \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " Jan 26 00:27:16 crc kubenswrapper[5124]: I0126 00:27:16.154099 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-cbnx8-push\" (UniqueName: \"kubernetes.io/secret/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-builder-dockercfg-cbnx8-push\") pod \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " Jan 26 00:27:16 crc kubenswrapper[5124]: I0126 00:27:16.154135 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-cbnx8-pull\" (UniqueName: \"kubernetes.io/secret/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-builder-dockercfg-cbnx8-pull\") pod \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " Jan 26 00:27:16 crc kubenswrapper[5124]: I0126 00:27:16.154153 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-build-blob-cache\") pod \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " Jan 26 00:27:16 crc kubenswrapper[5124]: I0126 00:27:16.154575 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "b7aad826-6989-4e26-bc04-f2d00bd4b0fa" (UID: "b7aad826-6989-4e26-bc04-f2d00bd4b0fa"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:27:16 crc kubenswrapper[5124]: I0126 00:27:16.154775 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "b7aad826-6989-4e26-bc04-f2d00bd4b0fa" (UID: "b7aad826-6989-4e26-bc04-f2d00bd4b0fa"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:27:16 crc kubenswrapper[5124]: I0126 00:27:16.155052 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-container-storage-run\") pod \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " Jan 26 00:27:16 crc kubenswrapper[5124]: I0126 00:27:16.155084 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-node-pullsecrets\") pod \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " Jan 26 00:27:16 crc kubenswrapper[5124]: I0126 00:27:16.155140 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-build-ca-bundles\") pod \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " Jan 26 00:27:16 crc kubenswrapper[5124]: I0126 00:27:16.155188 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "b7aad826-6989-4e26-bc04-f2d00bd4b0fa" (UID: "b7aad826-6989-4e26-bc04-f2d00bd4b0fa"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:27:16 crc kubenswrapper[5124]: I0126 00:27:16.155215 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-buildworkdir\") pod \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " Jan 26 00:27:16 crc kubenswrapper[5124]: I0126 00:27:16.155240 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hmzf\" (UniqueName: \"kubernetes.io/projected/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-kube-api-access-7hmzf\") pod \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " Jan 26 00:27:16 crc kubenswrapper[5124]: I0126 00:27:16.155274 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-container-storage-root\") pod \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " Jan 26 00:27:16 crc kubenswrapper[5124]: I0126 00:27:16.155293 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-buildcachedir\") pod \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " Jan 26 00:27:16 crc kubenswrapper[5124]: I0126 00:27:16.155541 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "b7aad826-6989-4e26-bc04-f2d00bd4b0fa" (UID: "b7aad826-6989-4e26-bc04-f2d00bd4b0fa"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:27:16 crc kubenswrapper[5124]: I0126 00:27:16.155610 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "b7aad826-6989-4e26-bc04-f2d00bd4b0fa" (UID: "b7aad826-6989-4e26-bc04-f2d00bd4b0fa"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:27:16 crc kubenswrapper[5124]: I0126 00:27:16.155712 5124 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:27:16 crc kubenswrapper[5124]: I0126 00:27:16.155738 5124 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 26 00:27:16 crc kubenswrapper[5124]: I0126 00:27:16.155749 5124 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 26 00:27:16 crc kubenswrapper[5124]: I0126 00:27:16.155758 5124 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:27:16 crc kubenswrapper[5124]: I0126 00:27:16.155766 5124 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 26 00:27:16 crc kubenswrapper[5124]: I0126 00:27:16.156033 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "b7aad826-6989-4e26-bc04-f2d00bd4b0fa" (UID: "b7aad826-6989-4e26-bc04-f2d00bd4b0fa"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:27:16 crc kubenswrapper[5124]: I0126 00:27:16.161653 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-builder-dockercfg-cbnx8-pull" (OuterVolumeSpecName: "builder-dockercfg-cbnx8-pull") pod "b7aad826-6989-4e26-bc04-f2d00bd4b0fa" (UID: "b7aad826-6989-4e26-bc04-f2d00bd4b0fa"). InnerVolumeSpecName "builder-dockercfg-cbnx8-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:27:16 crc kubenswrapper[5124]: I0126 00:27:16.161751 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-kube-api-access-7hmzf" (OuterVolumeSpecName: "kube-api-access-7hmzf") pod "b7aad826-6989-4e26-bc04-f2d00bd4b0fa" (UID: "b7aad826-6989-4e26-bc04-f2d00bd4b0fa"). InnerVolumeSpecName "kube-api-access-7hmzf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:27:16 crc kubenswrapper[5124]: I0126 00:27:16.161735 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-builder-dockercfg-cbnx8-push" (OuterVolumeSpecName: "builder-dockercfg-cbnx8-push") pod "b7aad826-6989-4e26-bc04-f2d00bd4b0fa" (UID: "b7aad826-6989-4e26-bc04-f2d00bd4b0fa"). InnerVolumeSpecName "builder-dockercfg-cbnx8-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:27:16 crc kubenswrapper[5124]: I0126 00:27:16.188754 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "b7aad826-6989-4e26-bc04-f2d00bd4b0fa" (UID: "b7aad826-6989-4e26-bc04-f2d00bd4b0fa"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:27:16 crc kubenswrapper[5124]: I0126 00:27:16.257412 5124 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-cbnx8-push\" (UniqueName: \"kubernetes.io/secret/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-builder-dockercfg-cbnx8-push\") on node \"crc\" DevicePath \"\"" Jan 26 00:27:16 crc kubenswrapper[5124]: I0126 00:27:16.257457 5124 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-cbnx8-pull\" (UniqueName: \"kubernetes.io/secret/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-builder-dockercfg-cbnx8-pull\") on node \"crc\" DevicePath \"\"" Jan 26 00:27:16 crc kubenswrapper[5124]: I0126 00:27:16.257470 5124 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 26 00:27:16 crc kubenswrapper[5124]: I0126 00:27:16.257482 5124 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 26 00:27:16 crc kubenswrapper[5124]: I0126 00:27:16.257495 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7hmzf\" (UniqueName: \"kubernetes.io/projected/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-kube-api-access-7hmzf\") on node \"crc\" DevicePath \"\"" Jan 26 00:27:16 crc kubenswrapper[5124]: I0126 00:27:16.357526 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "b7aad826-6989-4e26-bc04-f2d00bd4b0fa" (UID: "b7aad826-6989-4e26-bc04-f2d00bd4b0fa"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:27:16 crc kubenswrapper[5124]: I0126 00:27:16.358254 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-build-blob-cache\") pod \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\" (UID: \"b7aad826-6989-4e26-bc04-f2d00bd4b0fa\") " Jan 26 00:27:16 crc kubenswrapper[5124]: W0126 00:27:16.358339 5124 empty_dir.go:511] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/b7aad826-6989-4e26-bc04-f2d00bd4b0fa/volumes/kubernetes.io~empty-dir/build-blob-cache Jan 26 00:27:16 crc kubenswrapper[5124]: I0126 00:27:16.358376 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "b7aad826-6989-4e26-bc04-f2d00bd4b0fa" (UID: "b7aad826-6989-4e26-bc04-f2d00bd4b0fa"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:27:16 crc kubenswrapper[5124]: I0126 00:27:16.359083 5124 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 26 00:27:16 crc kubenswrapper[5124]: I0126 00:27:16.779458 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_b7aad826-6989-4e26-bc04-f2d00bd4b0fa/docker-build/0.log" Jan 26 00:27:16 crc kubenswrapper[5124]: I0126 00:27:16.781234 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"b7aad826-6989-4e26-bc04-f2d00bd4b0fa","Type":"ContainerDied","Data":"5191b5e5912c63c040bce1e5f8fe14f5d8e9e2f0a859e9fc17a80625e047e7e3"} Jan 26 00:27:16 crc kubenswrapper[5124]: I0126 00:27:16.781268 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:16 crc kubenswrapper[5124]: I0126 00:27:16.781284 5124 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5191b5e5912c63c040bce1e5f8fe14f5d8e9e2f0a859e9fc17a80625e047e7e3" Jan 26 00:27:17 crc kubenswrapper[5124]: I0126 00:27:17.934614 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "b7aad826-6989-4e26-bc04-f2d00bd4b0fa" (UID: "b7aad826-6989-4e26-bc04-f2d00bd4b0fa"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:27:17 crc kubenswrapper[5124]: I0126 00:27:17.980213 5124 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b7aad826-6989-4e26-bc04-f2d00bd4b0fa-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.195145 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.196836 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b7aad826-6989-4e26-bc04-f2d00bd4b0fa" containerName="docker-build" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.196859 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7aad826-6989-4e26-bc04-f2d00bd4b0fa" containerName="docker-build" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.196882 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bb0161f4-0739-4dad-b0fb-cb065fec2d03" containerName="oc" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.196895 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0161f4-0739-4dad-b0fb-cb065fec2d03" containerName="oc" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.196928 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b7aad826-6989-4e26-bc04-f2d00bd4b0fa" containerName="manage-dockerfile" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.196942 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7aad826-6989-4e26-bc04-f2d00bd4b0fa" containerName="manage-dockerfile" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.196964 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b7aad826-6989-4e26-bc04-f2d00bd4b0fa" containerName="git-clone" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.196978 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7aad826-6989-4e26-bc04-f2d00bd4b0fa" containerName="git-clone" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.197140 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="b7aad826-6989-4e26-bc04-f2d00bd4b0fa" containerName="docker-build" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.197170 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="bb0161f4-0739-4dad-b0fb-cb065fec2d03" containerName="oc" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.235258 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.235400 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.239289 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-cbnx8\"" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.239556 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-5-sys-config\"" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.239664 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-5-ca\"" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.239737 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-5-global-ca\"" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.309073 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-cbnx8-push\" (UniqueName: \"kubernetes.io/secret/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-builder-dockercfg-cbnx8-push\") pod \"service-telemetry-operator-5-build\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.309116 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.309135 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.309158 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.309174 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.309205 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:27:27 crc 
kubenswrapper[5124]: I0126 00:27:27.309363 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5cdv\" (UniqueName: \"kubernetes.io/projected/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-kube-api-access-m5cdv\") pod \"service-telemetry-operator-5-build\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.309704 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-cbnx8-pull\" (UniqueName: \"kubernetes.io/secret/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-builder-dockercfg-cbnx8-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.309796 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.309850 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.309933 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.309976 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.411919 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-cbnx8-pull\" (UniqueName: \"kubernetes.io/secret/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-builder-dockercfg-cbnx8-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.412054 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.412289 5124 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.412373 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.412426 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.412473 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-cbnx8-push\" (UniqueName: \"kubernetes.io/secret/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-builder-dockercfg-cbnx8-push\") pod \"service-telemetry-operator-5-build\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.412513 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.412532 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.412547 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.412669 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.412704 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-container-storage-root\") pod \"service-telemetry-operator-5-build\" 
(UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.412810 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.412860 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m5cdv\" (UniqueName: \"kubernetes.io/projected/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-kube-api-access-m5cdv\") pod \"service-telemetry-operator-5-build\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.413127 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.413133 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.413232 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.413401 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.413530 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.413975 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.414582 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.415137 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.421235 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-cbnx8-push\" (UniqueName: \"kubernetes.io/secret/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-builder-dockercfg-cbnx8-push\") pod \"service-telemetry-operator-5-build\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.422301 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-cbnx8-pull\" (UniqueName: \"kubernetes.io/secret/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-builder-dockercfg-cbnx8-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.436191 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5cdv\" (UniqueName: \"kubernetes.io/projected/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-kube-api-access-m5cdv\") pod \"service-telemetry-operator-5-build\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:27:27 crc kubenswrapper[5124]: I0126 00:27:27.569304 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:27:28 crc kubenswrapper[5124]: I0126 00:27:28.071626 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Jan 26 00:27:28 crc kubenswrapper[5124]: I0126 00:27:28.861684 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"c36c92a4-fe0a-4de5-9c43-bea3a04112e8","Type":"ContainerStarted","Data":"053d3aff36315fe2a1a3a1573aa752e7ad89dff48b572f526be9d0ba39b9dc07"} Jan 26 00:27:28 crc kubenswrapper[5124]: I0126 00:27:28.861811 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"c36c92a4-fe0a-4de5-9c43-bea3a04112e8","Type":"ContainerStarted","Data":"777a6876d3cde325ceaa131dd493ec7a08538af4e3abc6b5a9907b18ad563591"} Jan 26 00:27:36 crc kubenswrapper[5124]: E0126 00:27:36.045448 5124 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc36c92a4_fe0a_4de5_9c43_bea3a04112e8.slice/crio-053d3aff36315fe2a1a3a1573aa752e7ad89dff48b572f526be9d0ba39b9dc07.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc36c92a4_fe0a_4de5_9c43_bea3a04112e8.slice/crio-conmon-053d3aff36315fe2a1a3a1573aa752e7ad89dff48b572f526be9d0ba39b9dc07.scope\": RecentStats: unable to find data in memory cache]" Jan 26 00:27:36 crc kubenswrapper[5124]: I0126 00:27:36.918471 5124 generic.go:358] "Generic (PLEG): container finished" podID="c36c92a4-fe0a-4de5-9c43-bea3a04112e8" containerID="053d3aff36315fe2a1a3a1573aa752e7ad89dff48b572f526be9d0ba39b9dc07" exitCode=0 Jan 26 00:27:36 crc kubenswrapper[5124]: I0126 00:27:36.918522 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"c36c92a4-fe0a-4de5-9c43-bea3a04112e8","Type":"ContainerDied","Data":"053d3aff36315fe2a1a3a1573aa752e7ad89dff48b572f526be9d0ba39b9dc07"} Jan 26 00:27:37 crc kubenswrapper[5124]: I0126 00:27:37.928437 5124 generic.go:358] "Generic (PLEG): container finished" podID="c36c92a4-fe0a-4de5-9c43-bea3a04112e8" containerID="bd7fd1b745eb3d679227c558721d8a193fb7f1af8c24c685973cd561cd418c12" exitCode=0 Jan 26 00:27:37 crc kubenswrapper[5124]: I0126 00:27:37.929126 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"c36c92a4-fe0a-4de5-9c43-bea3a04112e8","Type":"ContainerDied","Data":"bd7fd1b745eb3d679227c558721d8a193fb7f1af8c24c685973cd561cd418c12"} Jan 26 00:27:37 crc kubenswrapper[5124]: I0126 00:27:37.986514 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_c36c92a4-fe0a-4de5-9c43-bea3a04112e8/manage-dockerfile/0.log" Jan 26 00:27:38 crc kubenswrapper[5124]: I0126 00:27:38.936532 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"c36c92a4-fe0a-4de5-9c43-bea3a04112e8","Type":"ContainerStarted","Data":"3cf7d964e9d975efc1340e86d1775ba6d54d2fd489f296f7b2ae968cf5e5c9e7"} Jan 26 00:27:38 crc kubenswrapper[5124]: I0126 00:27:38.964262 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-5-build" podStartSLOduration=11.964244668 
podStartE2EDuration="11.964244668s" podCreationTimestamp="2026-01-26 00:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:27:38.961215106 +0000 UTC m=+1136.870134455" watchObservedRunningTime="2026-01-26 00:27:38.964244668 +0000 UTC m=+1136.873164017" Jan 26 00:27:40 crc kubenswrapper[5124]: I0126 00:27:40.270863 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sljgw"] Jan 26 00:27:40 crc kubenswrapper[5124]: I0126 00:27:40.284239 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sljgw" Jan 26 00:27:40 crc kubenswrapper[5124]: I0126 00:27:40.308719 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sljgw"] Jan 26 00:27:40 crc kubenswrapper[5124]: I0126 00:27:40.309884 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bee45aa6-eb87-4f96-a4ac-68fb27808e96-utilities\") pod \"community-operators-sljgw\" (UID: \"bee45aa6-eb87-4f96-a4ac-68fb27808e96\") " pod="openshift-marketplace/community-operators-sljgw" Jan 26 00:27:40 crc kubenswrapper[5124]: I0126 00:27:40.309960 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d5np\" (UniqueName: \"kubernetes.io/projected/bee45aa6-eb87-4f96-a4ac-68fb27808e96-kube-api-access-9d5np\") pod \"community-operators-sljgw\" (UID: \"bee45aa6-eb87-4f96-a4ac-68fb27808e96\") " pod="openshift-marketplace/community-operators-sljgw" Jan 26 00:27:40 crc kubenswrapper[5124]: I0126 00:27:40.310064 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bee45aa6-eb87-4f96-a4ac-68fb27808e96-catalog-content\") pod \"community-operators-sljgw\" (UID: \"bee45aa6-eb87-4f96-a4ac-68fb27808e96\") " pod="openshift-marketplace/community-operators-sljgw" Jan 26 00:27:40 crc kubenswrapper[5124]: I0126 00:27:40.412262 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bee45aa6-eb87-4f96-a4ac-68fb27808e96-catalog-content\") pod \"community-operators-sljgw\" (UID: \"bee45aa6-eb87-4f96-a4ac-68fb27808e96\") " pod="openshift-marketplace/community-operators-sljgw" Jan 26 00:27:40 crc kubenswrapper[5124]: I0126 00:27:40.411690 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bee45aa6-eb87-4f96-a4ac-68fb27808e96-catalog-content\") pod \"community-operators-sljgw\" (UID: \"bee45aa6-eb87-4f96-a4ac-68fb27808e96\") " pod="openshift-marketplace/community-operators-sljgw" Jan 26 00:27:40 crc kubenswrapper[5124]: I0126 00:27:40.412411 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bee45aa6-eb87-4f96-a4ac-68fb27808e96-utilities\") pod \"community-operators-sljgw\" (UID: \"bee45aa6-eb87-4f96-a4ac-68fb27808e96\") " pod="openshift-marketplace/community-operators-sljgw" Jan 26 00:27:40 crc kubenswrapper[5124]: I0126 00:27:40.412692 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bee45aa6-eb87-4f96-a4ac-68fb27808e96-utilities\") 
pod \"community-operators-sljgw\" (UID: \"bee45aa6-eb87-4f96-a4ac-68fb27808e96\") " pod="openshift-marketplace/community-operators-sljgw" Jan 26 00:27:40 crc kubenswrapper[5124]: I0126 00:27:40.412754 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9d5np\" (UniqueName: \"kubernetes.io/projected/bee45aa6-eb87-4f96-a4ac-68fb27808e96-kube-api-access-9d5np\") pod \"community-operators-sljgw\" (UID: \"bee45aa6-eb87-4f96-a4ac-68fb27808e96\") " pod="openshift-marketplace/community-operators-sljgw" Jan 26 00:27:40 crc kubenswrapper[5124]: I0126 00:27:40.434496 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d5np\" (UniqueName: \"kubernetes.io/projected/bee45aa6-eb87-4f96-a4ac-68fb27808e96-kube-api-access-9d5np\") pod \"community-operators-sljgw\" (UID: \"bee45aa6-eb87-4f96-a4ac-68fb27808e96\") " pod="openshift-marketplace/community-operators-sljgw" Jan 26 00:27:40 crc kubenswrapper[5124]: I0126 00:27:40.611295 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sljgw" Jan 26 00:27:40 crc kubenswrapper[5124]: I0126 00:27:40.831149 5124 patch_prober.go:28] interesting pod/machine-config-daemon-kmxcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:27:40 crc kubenswrapper[5124]: I0126 00:27:40.831679 5124 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" podUID="95fa0656-150a-4d93-a324-77a1306d91f7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:27:40 crc kubenswrapper[5124]: I0126 00:27:40.831749 5124 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" Jan 26 00:27:40 crc kubenswrapper[5124]: I0126 00:27:40.832419 5124 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e4364654e7244afc307256d5ab68b10d1fea1b2d37b15d2d92ab4bb0d2fa9068"} pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 00:27:40 crc kubenswrapper[5124]: I0126 00:27:40.832485 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" podUID="95fa0656-150a-4d93-a324-77a1306d91f7" containerName="machine-config-daemon" containerID="cri-o://e4364654e7244afc307256d5ab68b10d1fea1b2d37b15d2d92ab4bb0d2fa9068" gracePeriod=600 Jan 26 00:27:41 crc kubenswrapper[5124]: I0126 00:27:41.129456 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sljgw"] Jan 26 00:27:41 crc kubenswrapper[5124]: W0126 00:27:41.141486 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbee45aa6_eb87_4f96_a4ac_68fb27808e96.slice/crio-b9607523e8802bc1be510909ff3a6fbf18614b7ebccfce15480d8008ca136f68 WatchSource:0}: Error finding container b9607523e8802bc1be510909ff3a6fbf18614b7ebccfce15480d8008ca136f68: Status 404 returned error can't find the 
container with id b9607523e8802bc1be510909ff3a6fbf18614b7ebccfce15480d8008ca136f68 Jan 26 00:27:41 crc kubenswrapper[5124]: I0126 00:27:41.968803 5124 generic.go:358] "Generic (PLEG): container finished" podID="bee45aa6-eb87-4f96-a4ac-68fb27808e96" containerID="c0fa00bdb6aca53ef082ddcebc9ae78fb9a67a8144b98d8e9e0665eca1774a22" exitCode=0 Jan 26 00:27:41 crc kubenswrapper[5124]: I0126 00:27:41.968892 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sljgw" event={"ID":"bee45aa6-eb87-4f96-a4ac-68fb27808e96","Type":"ContainerDied","Data":"c0fa00bdb6aca53ef082ddcebc9ae78fb9a67a8144b98d8e9e0665eca1774a22"} Jan 26 00:27:41 crc kubenswrapper[5124]: I0126 00:27:41.969549 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sljgw" event={"ID":"bee45aa6-eb87-4f96-a4ac-68fb27808e96","Type":"ContainerStarted","Data":"b9607523e8802bc1be510909ff3a6fbf18614b7ebccfce15480d8008ca136f68"} Jan 26 00:27:41 crc kubenswrapper[5124]: I0126 00:27:41.974398 5124 generic.go:358] "Generic (PLEG): container finished" podID="95fa0656-150a-4d93-a324-77a1306d91f7" containerID="e4364654e7244afc307256d5ab68b10d1fea1b2d37b15d2d92ab4bb0d2fa9068" exitCode=0 Jan 26 00:27:41 crc kubenswrapper[5124]: I0126 00:27:41.974530 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" event={"ID":"95fa0656-150a-4d93-a324-77a1306d91f7","Type":"ContainerDied","Data":"e4364654e7244afc307256d5ab68b10d1fea1b2d37b15d2d92ab4bb0d2fa9068"} Jan 26 00:27:41 crc kubenswrapper[5124]: I0126 00:27:41.974555 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" event={"ID":"95fa0656-150a-4d93-a324-77a1306d91f7","Type":"ContainerStarted","Data":"82c9cf1ec6062ea01a6d002676f82275bc429fe3760dae651fc24fe679ab62b5"} Jan 26 00:27:41 crc kubenswrapper[5124]: I0126 00:27:41.974574 5124 scope.go:117] "RemoveContainer" containerID="79635baa3ffeb5e4c69b5bd5a6a7d2d5fea58437cda8cef86f8317b3f38ad143" Jan 26 00:27:44 crc kubenswrapper[5124]: I0126 00:27:44.000534 5124 generic.go:358] "Generic (PLEG): container finished" podID="bee45aa6-eb87-4f96-a4ac-68fb27808e96" containerID="b8be0b549a56e0ba67b26e71ed2e1b52f8e53c9903df3a3b64a7a215806be8c6" exitCode=0 Jan 26 00:27:44 crc kubenswrapper[5124]: I0126 00:27:44.000674 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sljgw" event={"ID":"bee45aa6-eb87-4f96-a4ac-68fb27808e96","Type":"ContainerDied","Data":"b8be0b549a56e0ba67b26e71ed2e1b52f8e53c9903df3a3b64a7a215806be8c6"} Jan 26 00:27:45 crc kubenswrapper[5124]: I0126 00:27:45.010949 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sljgw" event={"ID":"bee45aa6-eb87-4f96-a4ac-68fb27808e96","Type":"ContainerStarted","Data":"86e1da615717a7f0f6367aabe59ee6260234d18394daa53d795eb3f9f6a907c3"} Jan 26 00:27:45 crc kubenswrapper[5124]: I0126 00:27:45.035604 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sljgw" podStartSLOduration=4.211524253 podStartE2EDuration="5.03557342s" podCreationTimestamp="2026-01-26 00:27:40 +0000 UTC" firstStartedPulling="2026-01-26 00:27:41.969577084 +0000 UTC m=+1139.878496433" lastFinishedPulling="2026-01-26 00:27:42.793626231 +0000 UTC m=+1140.702545600" observedRunningTime="2026-01-26 00:27:45.03485153 +0000 UTC m=+1142.943770879" 
watchObservedRunningTime="2026-01-26 00:27:45.03557342 +0000 UTC m=+1142.944492769" Jan 26 00:27:50 crc kubenswrapper[5124]: I0126 00:27:50.612343 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sljgw" Jan 26 00:27:50 crc kubenswrapper[5124]: I0126 00:27:50.612873 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-sljgw" Jan 26 00:27:50 crc kubenswrapper[5124]: I0126 00:27:50.655692 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sljgw" Jan 26 00:27:51 crc kubenswrapper[5124]: I0126 00:27:51.088518 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sljgw" Jan 26 00:27:52 crc kubenswrapper[5124]: I0126 00:27:52.060790 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sljgw"] Jan 26 00:27:53 crc kubenswrapper[5124]: I0126 00:27:53.064840 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sljgw" podUID="bee45aa6-eb87-4f96-a4ac-68fb27808e96" containerName="registry-server" containerID="cri-o://86e1da615717a7f0f6367aabe59ee6260234d18394daa53d795eb3f9f6a907c3" gracePeriod=2 Jan 26 00:27:53 crc kubenswrapper[5124]: I0126 00:27:53.466761 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sljgw" Jan 26 00:27:53 crc kubenswrapper[5124]: I0126 00:27:53.630614 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bee45aa6-eb87-4f96-a4ac-68fb27808e96-utilities\") pod \"bee45aa6-eb87-4f96-a4ac-68fb27808e96\" (UID: \"bee45aa6-eb87-4f96-a4ac-68fb27808e96\") " Jan 26 00:27:53 crc kubenswrapper[5124]: I0126 00:27:53.630705 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9d5np\" (UniqueName: \"kubernetes.io/projected/bee45aa6-eb87-4f96-a4ac-68fb27808e96-kube-api-access-9d5np\") pod \"bee45aa6-eb87-4f96-a4ac-68fb27808e96\" (UID: \"bee45aa6-eb87-4f96-a4ac-68fb27808e96\") " Jan 26 00:27:53 crc kubenswrapper[5124]: I0126 00:27:53.630855 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bee45aa6-eb87-4f96-a4ac-68fb27808e96-catalog-content\") pod \"bee45aa6-eb87-4f96-a4ac-68fb27808e96\" (UID: \"bee45aa6-eb87-4f96-a4ac-68fb27808e96\") " Jan 26 00:27:53 crc kubenswrapper[5124]: I0126 00:27:53.631481 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bee45aa6-eb87-4f96-a4ac-68fb27808e96-utilities" (OuterVolumeSpecName: "utilities") pod "bee45aa6-eb87-4f96-a4ac-68fb27808e96" (UID: "bee45aa6-eb87-4f96-a4ac-68fb27808e96"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:27:53 crc kubenswrapper[5124]: I0126 00:27:53.641458 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bee45aa6-eb87-4f96-a4ac-68fb27808e96-kube-api-access-9d5np" (OuterVolumeSpecName: "kube-api-access-9d5np") pod "bee45aa6-eb87-4f96-a4ac-68fb27808e96" (UID: "bee45aa6-eb87-4f96-a4ac-68fb27808e96"). InnerVolumeSpecName "kube-api-access-9d5np". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:27:53 crc kubenswrapper[5124]: I0126 00:27:53.678452 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bee45aa6-eb87-4f96-a4ac-68fb27808e96-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bee45aa6-eb87-4f96-a4ac-68fb27808e96" (UID: "bee45aa6-eb87-4f96-a4ac-68fb27808e96"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:27:53 crc kubenswrapper[5124]: I0126 00:27:53.732468 5124 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bee45aa6-eb87-4f96-a4ac-68fb27808e96-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:27:53 crc kubenswrapper[5124]: I0126 00:27:53.732506 5124 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bee45aa6-eb87-4f96-a4ac-68fb27808e96-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:27:53 crc kubenswrapper[5124]: I0126 00:27:53.732516 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9d5np\" (UniqueName: \"kubernetes.io/projected/bee45aa6-eb87-4f96-a4ac-68fb27808e96-kube-api-access-9d5np\") on node \"crc\" DevicePath \"\"" Jan 26 00:27:54 crc kubenswrapper[5124]: I0126 00:27:54.071205 5124 generic.go:358] "Generic (PLEG): container finished" podID="bee45aa6-eb87-4f96-a4ac-68fb27808e96" containerID="86e1da615717a7f0f6367aabe59ee6260234d18394daa53d795eb3f9f6a907c3" exitCode=0 Jan 26 00:27:54 crc kubenswrapper[5124]: I0126 00:27:54.071454 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sljgw" event={"ID":"bee45aa6-eb87-4f96-a4ac-68fb27808e96","Type":"ContainerDied","Data":"86e1da615717a7f0f6367aabe59ee6260234d18394daa53d795eb3f9f6a907c3"} Jan 26 00:27:54 crc kubenswrapper[5124]: I0126 00:27:54.071487 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sljgw" event={"ID":"bee45aa6-eb87-4f96-a4ac-68fb27808e96","Type":"ContainerDied","Data":"b9607523e8802bc1be510909ff3a6fbf18614b7ebccfce15480d8008ca136f68"} Jan 26 00:27:54 crc kubenswrapper[5124]: I0126 00:27:54.071511 5124 scope.go:117] "RemoveContainer" containerID="86e1da615717a7f0f6367aabe59ee6260234d18394daa53d795eb3f9f6a907c3" Jan 26 00:27:54 crc kubenswrapper[5124]: I0126 00:27:54.071577 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sljgw" Jan 26 00:27:54 crc kubenswrapper[5124]: I0126 00:27:54.087902 5124 scope.go:117] "RemoveContainer" containerID="b8be0b549a56e0ba67b26e71ed2e1b52f8e53c9903df3a3b64a7a215806be8c6" Jan 26 00:27:54 crc kubenswrapper[5124]: I0126 00:27:54.096869 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sljgw"] Jan 26 00:27:54 crc kubenswrapper[5124]: I0126 00:27:54.104316 5124 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sljgw"] Jan 26 00:27:54 crc kubenswrapper[5124]: I0126 00:27:54.115446 5124 scope.go:117] "RemoveContainer" containerID="c0fa00bdb6aca53ef082ddcebc9ae78fb9a67a8144b98d8e9e0665eca1774a22" Jan 26 00:27:54 crc kubenswrapper[5124]: I0126 00:27:54.130618 5124 scope.go:117] "RemoveContainer" containerID="86e1da615717a7f0f6367aabe59ee6260234d18394daa53d795eb3f9f6a907c3" Jan 26 00:27:54 crc kubenswrapper[5124]: E0126 00:27:54.130943 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86e1da615717a7f0f6367aabe59ee6260234d18394daa53d795eb3f9f6a907c3\": container with ID starting with 86e1da615717a7f0f6367aabe59ee6260234d18394daa53d795eb3f9f6a907c3 not found: ID does not exist" containerID="86e1da615717a7f0f6367aabe59ee6260234d18394daa53d795eb3f9f6a907c3" Jan 26 00:27:54 crc kubenswrapper[5124]: I0126 00:27:54.130972 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86e1da615717a7f0f6367aabe59ee6260234d18394daa53d795eb3f9f6a907c3"} err="failed to get container status \"86e1da615717a7f0f6367aabe59ee6260234d18394daa53d795eb3f9f6a907c3\": rpc error: code = NotFound desc = could not find container \"86e1da615717a7f0f6367aabe59ee6260234d18394daa53d795eb3f9f6a907c3\": container with ID starting with 86e1da615717a7f0f6367aabe59ee6260234d18394daa53d795eb3f9f6a907c3 not found: ID does not exist" Jan 26 00:27:54 crc kubenswrapper[5124]: I0126 00:27:54.130992 5124 scope.go:117] "RemoveContainer" containerID="b8be0b549a56e0ba67b26e71ed2e1b52f8e53c9903df3a3b64a7a215806be8c6" Jan 26 00:27:54 crc kubenswrapper[5124]: E0126 00:27:54.131171 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8be0b549a56e0ba67b26e71ed2e1b52f8e53c9903df3a3b64a7a215806be8c6\": container with ID starting with b8be0b549a56e0ba67b26e71ed2e1b52f8e53c9903df3a3b64a7a215806be8c6 not found: ID does not exist" containerID="b8be0b549a56e0ba67b26e71ed2e1b52f8e53c9903df3a3b64a7a215806be8c6" Jan 26 00:27:54 crc kubenswrapper[5124]: I0126 00:27:54.131192 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8be0b549a56e0ba67b26e71ed2e1b52f8e53c9903df3a3b64a7a215806be8c6"} err="failed to get container status \"b8be0b549a56e0ba67b26e71ed2e1b52f8e53c9903df3a3b64a7a215806be8c6\": rpc error: code = NotFound desc = could not find container \"b8be0b549a56e0ba67b26e71ed2e1b52f8e53c9903df3a3b64a7a215806be8c6\": container with ID starting with b8be0b549a56e0ba67b26e71ed2e1b52f8e53c9903df3a3b64a7a215806be8c6 not found: ID does not exist" Jan 26 00:27:54 crc kubenswrapper[5124]: I0126 00:27:54.131204 5124 scope.go:117] "RemoveContainer" containerID="c0fa00bdb6aca53ef082ddcebc9ae78fb9a67a8144b98d8e9e0665eca1774a22" Jan 26 00:27:54 crc kubenswrapper[5124]: E0126 00:27:54.131395 5124 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c0fa00bdb6aca53ef082ddcebc9ae78fb9a67a8144b98d8e9e0665eca1774a22\": container with ID starting with c0fa00bdb6aca53ef082ddcebc9ae78fb9a67a8144b98d8e9e0665eca1774a22 not found: ID does not exist" containerID="c0fa00bdb6aca53ef082ddcebc9ae78fb9a67a8144b98d8e9e0665eca1774a22" Jan 26 00:27:54 crc kubenswrapper[5124]: I0126 00:27:54.131418 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0fa00bdb6aca53ef082ddcebc9ae78fb9a67a8144b98d8e9e0665eca1774a22"} err="failed to get container status \"c0fa00bdb6aca53ef082ddcebc9ae78fb9a67a8144b98d8e9e0665eca1774a22\": rpc error: code = NotFound desc = could not find container \"c0fa00bdb6aca53ef082ddcebc9ae78fb9a67a8144b98d8e9e0665eca1774a22\": container with ID starting with c0fa00bdb6aca53ef082ddcebc9ae78fb9a67a8144b98d8e9e0665eca1774a22 not found: ID does not exist" Jan 26 00:27:54 crc kubenswrapper[5124]: I0126 00:27:54.372435 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bee45aa6-eb87-4f96-a4ac-68fb27808e96" path="/var/lib/kubelet/pods/bee45aa6-eb87-4f96-a4ac-68fb27808e96/volumes" Jan 26 00:28:00 crc kubenswrapper[5124]: I0126 00:28:00.146660 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29489788-snfzt"] Jan 26 00:28:00 crc kubenswrapper[5124]: I0126 00:28:00.148014 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bee45aa6-eb87-4f96-a4ac-68fb27808e96" containerName="extract-utilities" Jan 26 00:28:00 crc kubenswrapper[5124]: I0126 00:28:00.148033 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="bee45aa6-eb87-4f96-a4ac-68fb27808e96" containerName="extract-utilities" Jan 26 00:28:00 crc kubenswrapper[5124]: I0126 00:28:00.148046 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bee45aa6-eb87-4f96-a4ac-68fb27808e96" containerName="registry-server" Jan 26 00:28:00 crc kubenswrapper[5124]: I0126 00:28:00.148053 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="bee45aa6-eb87-4f96-a4ac-68fb27808e96" containerName="registry-server" Jan 26 00:28:00 crc kubenswrapper[5124]: I0126 00:28:00.148076 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bee45aa6-eb87-4f96-a4ac-68fb27808e96" containerName="extract-content" Jan 26 00:28:00 crc kubenswrapper[5124]: I0126 00:28:00.148085 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="bee45aa6-eb87-4f96-a4ac-68fb27808e96" containerName="extract-content" Jan 26 00:28:00 crc kubenswrapper[5124]: I0126 00:28:00.148214 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="bee45aa6-eb87-4f96-a4ac-68fb27808e96" containerName="registry-server" Jan 26 00:28:00 crc kubenswrapper[5124]: I0126 00:28:00.152298 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489788-snfzt" Jan 26 00:28:00 crc kubenswrapper[5124]: I0126 00:28:00.154295 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-26tfw\"" Jan 26 00:28:00 crc kubenswrapper[5124]: I0126 00:28:00.155662 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 26 00:28:00 crc kubenswrapper[5124]: I0126 00:28:00.155895 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 26 00:28:00 crc kubenswrapper[5124]: I0126 00:28:00.161474 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489788-snfzt"] Jan 26 00:28:00 crc kubenswrapper[5124]: I0126 00:28:00.233213 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmznl\" (UniqueName: \"kubernetes.io/projected/fd3d4863-cc6c-4e46-b225-602dd146a02a-kube-api-access-dmznl\") pod \"auto-csr-approver-29489788-snfzt\" (UID: \"fd3d4863-cc6c-4e46-b225-602dd146a02a\") " pod="openshift-infra/auto-csr-approver-29489788-snfzt" Jan 26 00:28:00 crc kubenswrapper[5124]: I0126 00:28:00.334492 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dmznl\" (UniqueName: \"kubernetes.io/projected/fd3d4863-cc6c-4e46-b225-602dd146a02a-kube-api-access-dmznl\") pod \"auto-csr-approver-29489788-snfzt\" (UID: \"fd3d4863-cc6c-4e46-b225-602dd146a02a\") " pod="openshift-infra/auto-csr-approver-29489788-snfzt" Jan 26 00:28:00 crc kubenswrapper[5124]: I0126 00:28:00.356037 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmznl\" (UniqueName: \"kubernetes.io/projected/fd3d4863-cc6c-4e46-b225-602dd146a02a-kube-api-access-dmznl\") pod \"auto-csr-approver-29489788-snfzt\" (UID: \"fd3d4863-cc6c-4e46-b225-602dd146a02a\") " pod="openshift-infra/auto-csr-approver-29489788-snfzt" Jan 26 00:28:00 crc kubenswrapper[5124]: I0126 00:28:00.480368 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489788-snfzt" Jan 26 00:28:00 crc kubenswrapper[5124]: I0126 00:28:00.670937 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489788-snfzt"] Jan 26 00:28:00 crc kubenswrapper[5124]: W0126 00:28:00.676329 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd3d4863_cc6c_4e46_b225_602dd146a02a.slice/crio-2cd14b3633ab8ed32726d2e0da52188b46a09ed31fe05af2fa14bc305c7156d0 WatchSource:0}: Error finding container 2cd14b3633ab8ed32726d2e0da52188b46a09ed31fe05af2fa14bc305c7156d0: Status 404 returned error can't find the container with id 2cd14b3633ab8ed32726d2e0da52188b46a09ed31fe05af2fa14bc305c7156d0 Jan 26 00:28:01 crc kubenswrapper[5124]: I0126 00:28:01.120862 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489788-snfzt" event={"ID":"fd3d4863-cc6c-4e46-b225-602dd146a02a","Type":"ContainerStarted","Data":"2cd14b3633ab8ed32726d2e0da52188b46a09ed31fe05af2fa14bc305c7156d0"} Jan 26 00:28:07 crc kubenswrapper[5124]: I0126 00:28:07.172814 5124 generic.go:358] "Generic (PLEG): container finished" podID="fd3d4863-cc6c-4e46-b225-602dd146a02a" containerID="0f41a0bd1d15b23f6a34995e5c886f0e347d929ba091d8884092350b058e9070" exitCode=0 Jan 26 00:28:07 crc kubenswrapper[5124]: I0126 00:28:07.175537 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489788-snfzt" event={"ID":"fd3d4863-cc6c-4e46-b225-602dd146a02a","Type":"ContainerDied","Data":"0f41a0bd1d15b23f6a34995e5c886f0e347d929ba091d8884092350b058e9070"} Jan 26 00:28:08 crc kubenswrapper[5124]: I0126 00:28:08.517645 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489788-snfzt" Jan 26 00:28:08 crc kubenswrapper[5124]: I0126 00:28:08.584214 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmznl\" (UniqueName: \"kubernetes.io/projected/fd3d4863-cc6c-4e46-b225-602dd146a02a-kube-api-access-dmznl\") pod \"fd3d4863-cc6c-4e46-b225-602dd146a02a\" (UID: \"fd3d4863-cc6c-4e46-b225-602dd146a02a\") " Jan 26 00:28:08 crc kubenswrapper[5124]: I0126 00:28:08.594250 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd3d4863-cc6c-4e46-b225-602dd146a02a-kube-api-access-dmznl" (OuterVolumeSpecName: "kube-api-access-dmznl") pod "fd3d4863-cc6c-4e46-b225-602dd146a02a" (UID: "fd3d4863-cc6c-4e46-b225-602dd146a02a"). InnerVolumeSpecName "kube-api-access-dmznl". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:28:08 crc kubenswrapper[5124]: I0126 00:28:08.685441 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dmznl\" (UniqueName: \"kubernetes.io/projected/fd3d4863-cc6c-4e46-b225-602dd146a02a-kube-api-access-dmznl\") on node \"crc\" DevicePath \"\"" Jan 26 00:28:09 crc kubenswrapper[5124]: I0126 00:28:09.194681 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489788-snfzt" event={"ID":"fd3d4863-cc6c-4e46-b225-602dd146a02a","Type":"ContainerDied","Data":"2cd14b3633ab8ed32726d2e0da52188b46a09ed31fe05af2fa14bc305c7156d0"} Jan 26 00:28:09 crc kubenswrapper[5124]: I0126 00:28:09.194725 5124 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cd14b3633ab8ed32726d2e0da52188b46a09ed31fe05af2fa14bc305c7156d0" Jan 26 00:28:09 crc kubenswrapper[5124]: I0126 00:28:09.194750 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489788-snfzt" Jan 26 00:28:09 crc kubenswrapper[5124]: I0126 00:28:09.574977 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29489782-p756r"] Jan 26 00:28:09 crc kubenswrapper[5124]: I0126 00:28:09.583408 5124 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29489782-p756r"] Jan 26 00:28:10 crc kubenswrapper[5124]: I0126 00:28:10.373574 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c283d038-4574-4bf6-a5e3-203f888f1367" path="/var/lib/kubelet/pods/c283d038-4574-4bf6-a5e3-203f888f1367/volumes" Jan 26 00:28:42 crc kubenswrapper[5124]: I0126 00:28:42.707324 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_b7aad826-6989-4e26-bc04-f2d00bd4b0fa/docker-build/0.log" Jan 26 00:28:42 crc kubenswrapper[5124]: I0126 00:28:42.708246 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_b7aad826-6989-4e26-bc04-f2d00bd4b0fa/docker-build/0.log" Jan 26 00:28:42 crc kubenswrapper[5124]: I0126 00:28:42.709662 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_907f0bcb-9b75-4ab6-b721-88558878d13b/docker-build/0.log" Jan 26 00:28:42 crc kubenswrapper[5124]: I0126 00:28:42.709928 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_907f0bcb-9b75-4ab6-b721-88558878d13b/docker-build/0.log" Jan 26 00:28:42 crc kubenswrapper[5124]: I0126 00:28:42.711995 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_cd238caf-5876-429a-9f3a-594804065e20/docker-build/0.log" Jan 26 00:28:42 crc kubenswrapper[5124]: I0126 00:28:42.712354 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_cd238caf-5876-429a-9f3a-594804065e20/docker-build/0.log" Jan 26 00:28:42 crc kubenswrapper[5124]: I0126 00:28:42.753421 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-smnb7_f826f136-a910-4120-aa62-a08e427590c0/kube-multus/0.log" Jan 26 00:28:42 crc kubenswrapper[5124]: I0126 00:28:42.753566 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-smnb7_f826f136-a910-4120-aa62-a08e427590c0/kube-multus/0.log" Jan 26 00:28:42 crc 
kubenswrapper[5124]: I0126 00:28:42.767768 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 26 00:28:42 crc kubenswrapper[5124]: I0126 00:28:42.767835 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 26 00:28:45 crc kubenswrapper[5124]: I0126 00:28:45.430699 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_c36c92a4-fe0a-4de5-9c43-bea3a04112e8/docker-build/0.log" Jan 26 00:28:45 crc kubenswrapper[5124]: I0126 00:28:45.432356 5124 generic.go:358] "Generic (PLEG): container finished" podID="c36c92a4-fe0a-4de5-9c43-bea3a04112e8" containerID="3cf7d964e9d975efc1340e86d1775ba6d54d2fd489f296f7b2ae968cf5e5c9e7" exitCode=1 Jan 26 00:28:45 crc kubenswrapper[5124]: I0126 00:28:45.432496 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"c36c92a4-fe0a-4de5-9c43-bea3a04112e8","Type":"ContainerDied","Data":"3cf7d964e9d975efc1340e86d1775ba6d54d2fd489f296f7b2ae968cf5e5c9e7"} Jan 26 00:28:46 crc kubenswrapper[5124]: I0126 00:28:46.669479 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_c36c92a4-fe0a-4de5-9c43-bea3a04112e8/docker-build/0.log" Jan 26 00:28:46 crc kubenswrapper[5124]: I0126 00:28:46.670189 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:28:46 crc kubenswrapper[5124]: I0126 00:28:46.754219 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-build-system-configs\") pod \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " Jan 26 00:28:46 crc kubenswrapper[5124]: I0126 00:28:46.755261 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-node-pullsecrets\") pod \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " Jan 26 00:28:46 crc kubenswrapper[5124]: I0126 00:28:46.755300 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-build-blob-cache\") pod \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " Jan 26 00:28:46 crc kubenswrapper[5124]: I0126 00:28:46.755331 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-container-storage-run\") pod \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " Jan 26 00:28:46 crc kubenswrapper[5124]: I0126 00:28:46.755385 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-container-storage-root\") pod \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\" (UID: 
\"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " Jan 26 00:28:46 crc kubenswrapper[5124]: I0126 00:28:46.755407 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5cdv\" (UniqueName: \"kubernetes.io/projected/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-kube-api-access-m5cdv\") pod \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " Jan 26 00:28:46 crc kubenswrapper[5124]: I0126 00:28:46.755430 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-cbnx8-push\" (UniqueName: \"kubernetes.io/secret/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-builder-dockercfg-cbnx8-push\") pod \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " Jan 26 00:28:46 crc kubenswrapper[5124]: I0126 00:28:46.755213 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "c36c92a4-fe0a-4de5-9c43-bea3a04112e8" (UID: "c36c92a4-fe0a-4de5-9c43-bea3a04112e8"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:28:46 crc kubenswrapper[5124]: I0126 00:28:46.755568 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "c36c92a4-fe0a-4de5-9c43-bea3a04112e8" (UID: "c36c92a4-fe0a-4de5-9c43-bea3a04112e8"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:28:46 crc kubenswrapper[5124]: I0126 00:28:46.756622 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "c36c92a4-fe0a-4de5-9c43-bea3a04112e8" (UID: "c36c92a4-fe0a-4de5-9c43-bea3a04112e8"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:28:46 crc kubenswrapper[5124]: I0126 00:28:46.756739 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-buildworkdir\") pod \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " Jan 26 00:28:46 crc kubenswrapper[5124]: I0126 00:28:46.756854 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-buildcachedir\") pod \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " Jan 26 00:28:46 crc kubenswrapper[5124]: I0126 00:28:46.756906 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-build-ca-bundles\") pod \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " Jan 26 00:28:46 crc kubenswrapper[5124]: I0126 00:28:46.756990 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-cbnx8-pull\" (UniqueName: \"kubernetes.io/secret/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-builder-dockercfg-cbnx8-pull\") pod \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " Jan 26 00:28:46 crc kubenswrapper[5124]: I0126 00:28:46.756898 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "c36c92a4-fe0a-4de5-9c43-bea3a04112e8" (UID: "c36c92a4-fe0a-4de5-9c43-bea3a04112e8"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:28:46 crc kubenswrapper[5124]: I0126 00:28:46.757077 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-build-proxy-ca-bundles\") pod \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\" (UID: \"c36c92a4-fe0a-4de5-9c43-bea3a04112e8\") " Jan 26 00:28:46 crc kubenswrapper[5124]: I0126 00:28:46.757637 5124 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 26 00:28:46 crc kubenswrapper[5124]: I0126 00:28:46.757661 5124 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 26 00:28:46 crc kubenswrapper[5124]: I0126 00:28:46.757675 5124 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 26 00:28:46 crc kubenswrapper[5124]: I0126 00:28:46.757687 5124 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 26 00:28:46 crc kubenswrapper[5124]: I0126 00:28:46.757828 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "c36c92a4-fe0a-4de5-9c43-bea3a04112e8" (UID: "c36c92a4-fe0a-4de5-9c43-bea3a04112e8"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:28:46 crc kubenswrapper[5124]: I0126 00:28:46.757844 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "c36c92a4-fe0a-4de5-9c43-bea3a04112e8" (UID: "c36c92a4-fe0a-4de5-9c43-bea3a04112e8"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:28:46 crc kubenswrapper[5124]: I0126 00:28:46.761357 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-kube-api-access-m5cdv" (OuterVolumeSpecName: "kube-api-access-m5cdv") pod "c36c92a4-fe0a-4de5-9c43-bea3a04112e8" (UID: "c36c92a4-fe0a-4de5-9c43-bea3a04112e8"). InnerVolumeSpecName "kube-api-access-m5cdv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:28:46 crc kubenswrapper[5124]: I0126 00:28:46.761379 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-builder-dockercfg-cbnx8-push" (OuterVolumeSpecName: "builder-dockercfg-cbnx8-push") pod "c36c92a4-fe0a-4de5-9c43-bea3a04112e8" (UID: "c36c92a4-fe0a-4de5-9c43-bea3a04112e8"). InnerVolumeSpecName "builder-dockercfg-cbnx8-push". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:28:46 crc kubenswrapper[5124]: I0126 00:28:46.762365 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-builder-dockercfg-cbnx8-pull" (OuterVolumeSpecName: "builder-dockercfg-cbnx8-pull") pod "c36c92a4-fe0a-4de5-9c43-bea3a04112e8" (UID: "c36c92a4-fe0a-4de5-9c43-bea3a04112e8"). InnerVolumeSpecName "builder-dockercfg-cbnx8-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:28:46 crc kubenswrapper[5124]: I0126 00:28:46.789910 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "c36c92a4-fe0a-4de5-9c43-bea3a04112e8" (UID: "c36c92a4-fe0a-4de5-9c43-bea3a04112e8"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:28:46 crc kubenswrapper[5124]: I0126 00:28:46.859376 5124 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 26 00:28:46 crc kubenswrapper[5124]: I0126 00:28:46.859420 5124 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:28:46 crc kubenswrapper[5124]: I0126 00:28:46.859432 5124 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-cbnx8-pull\" (UniqueName: \"kubernetes.io/secret/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-builder-dockercfg-cbnx8-pull\") on node \"crc\" DevicePath \"\"" Jan 26 00:28:46 crc kubenswrapper[5124]: I0126 00:28:46.859445 5124 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:28:46 crc kubenswrapper[5124]: I0126 00:28:46.859454 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5cdv\" (UniqueName: \"kubernetes.io/projected/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-kube-api-access-m5cdv\") on node \"crc\" DevicePath \"\"" Jan 26 00:28:46 crc kubenswrapper[5124]: I0126 00:28:46.859462 5124 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-cbnx8-push\" (UniqueName: \"kubernetes.io/secret/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-builder-dockercfg-cbnx8-push\") on node \"crc\" DevicePath \"\"" Jan 26 00:28:46 crc kubenswrapper[5124]: I0126 00:28:46.969640 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "c36c92a4-fe0a-4de5-9c43-bea3a04112e8" (UID: "c36c92a4-fe0a-4de5-9c43-bea3a04112e8"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:28:47 crc kubenswrapper[5124]: I0126 00:28:47.061855 5124 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 26 00:28:47 crc kubenswrapper[5124]: I0126 00:28:47.450159 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_c36c92a4-fe0a-4de5-9c43-bea3a04112e8/docker-build/0.log" Jan 26 00:28:47 crc kubenswrapper[5124]: I0126 00:28:47.451000 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"c36c92a4-fe0a-4de5-9c43-bea3a04112e8","Type":"ContainerDied","Data":"777a6876d3cde325ceaa131dd493ec7a08538af4e3abc6b5a9907b18ad563591"} Jan 26 00:28:47 crc kubenswrapper[5124]: I0126 00:28:47.451040 5124 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="777a6876d3cde325ceaa131dd493ec7a08538af4e3abc6b5a9907b18ad563591" Jan 26 00:28:47 crc kubenswrapper[5124]: I0126 00:28:47.451143 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:28:48 crc kubenswrapper[5124]: I0126 00:28:48.534954 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "c36c92a4-fe0a-4de5-9c43-bea3a04112e8" (UID: "c36c92a4-fe0a-4de5-9c43-bea3a04112e8"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:28:48 crc kubenswrapper[5124]: I0126 00:28:48.580426 5124 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c36c92a4-fe0a-4de5-9c43-bea3a04112e8-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 26 00:28:52 crc kubenswrapper[5124]: I0126 00:28:52.316270 5124 scope.go:117] "RemoveContainer" containerID="7ed15fab4846cd64a3cf0394a3b36f1423d04511f8706eba7b29d2289ede7297" Jan 26 00:29:23 crc kubenswrapper[5124]: I0126 00:29:23.978804 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-knpgw/must-gather-jg8kf"] Jan 26 00:29:23 crc kubenswrapper[5124]: I0126 00:29:23.980718 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c36c92a4-fe0a-4de5-9c43-bea3a04112e8" containerName="manage-dockerfile" Jan 26 00:29:23 crc kubenswrapper[5124]: I0126 00:29:23.980743 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="c36c92a4-fe0a-4de5-9c43-bea3a04112e8" containerName="manage-dockerfile" Jan 26 00:29:23 crc kubenswrapper[5124]: I0126 00:29:23.980763 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c36c92a4-fe0a-4de5-9c43-bea3a04112e8" containerName="docker-build" Jan 26 00:29:23 crc kubenswrapper[5124]: I0126 00:29:23.980773 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="c36c92a4-fe0a-4de5-9c43-bea3a04112e8" containerName="docker-build" Jan 26 00:29:23 crc kubenswrapper[5124]: I0126 00:29:23.980804 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fd3d4863-cc6c-4e46-b225-602dd146a02a" containerName="oc" Jan 26 00:29:23 crc kubenswrapper[5124]: I0126 00:29:23.980815 5124 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="fd3d4863-cc6c-4e46-b225-602dd146a02a" containerName="oc" Jan 26 00:29:23 crc kubenswrapper[5124]: I0126 00:29:23.980838 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c36c92a4-fe0a-4de5-9c43-bea3a04112e8" containerName="git-clone" Jan 26 00:29:23 crc kubenswrapper[5124]: I0126 00:29:23.980849 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="c36c92a4-fe0a-4de5-9c43-bea3a04112e8" containerName="git-clone" Jan 26 00:29:23 crc kubenswrapper[5124]: I0126 00:29:23.981057 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="c36c92a4-fe0a-4de5-9c43-bea3a04112e8" containerName="docker-build" Jan 26 00:29:23 crc kubenswrapper[5124]: I0126 00:29:23.981086 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="fd3d4863-cc6c-4e46-b225-602dd146a02a" containerName="oc" Jan 26 00:29:23 crc kubenswrapper[5124]: I0126 00:29:23.989777 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-knpgw/must-gather-jg8kf" Jan 26 00:29:23 crc kubenswrapper[5124]: I0126 00:29:23.993181 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-knpgw\"/\"default-dockercfg-262pk\"" Jan 26 00:29:23 crc kubenswrapper[5124]: I0126 00:29:23.993804 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-knpgw\"/\"kube-root-ca.crt\"" Jan 26 00:29:23 crc kubenswrapper[5124]: I0126 00:29:23.993836 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-knpgw\"/\"openshift-service-ca.crt\"" Jan 26 00:29:23 crc kubenswrapper[5124]: I0126 00:29:23.993844 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-knpgw/must-gather-jg8kf"] Jan 26 00:29:24 crc kubenswrapper[5124]: I0126 00:29:24.119259 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn5qc\" (UniqueName: \"kubernetes.io/projected/cf0bc329-d2c3-484c-8a9a-0c5a38c0e025-kube-api-access-vn5qc\") pod \"must-gather-jg8kf\" (UID: \"cf0bc329-d2c3-484c-8a9a-0c5a38c0e025\") " pod="openshift-must-gather-knpgw/must-gather-jg8kf" Jan 26 00:29:24 crc kubenswrapper[5124]: I0126 00:29:24.119387 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cf0bc329-d2c3-484c-8a9a-0c5a38c0e025-must-gather-output\") pod \"must-gather-jg8kf\" (UID: \"cf0bc329-d2c3-484c-8a9a-0c5a38c0e025\") " pod="openshift-must-gather-knpgw/must-gather-jg8kf" Jan 26 00:29:24 crc kubenswrapper[5124]: I0126 00:29:24.220262 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cf0bc329-d2c3-484c-8a9a-0c5a38c0e025-must-gather-output\") pod \"must-gather-jg8kf\" (UID: \"cf0bc329-d2c3-484c-8a9a-0c5a38c0e025\") " pod="openshift-must-gather-knpgw/must-gather-jg8kf" Jan 26 00:29:24 crc kubenswrapper[5124]: I0126 00:29:24.220357 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vn5qc\" (UniqueName: \"kubernetes.io/projected/cf0bc329-d2c3-484c-8a9a-0c5a38c0e025-kube-api-access-vn5qc\") pod \"must-gather-jg8kf\" (UID: \"cf0bc329-d2c3-484c-8a9a-0c5a38c0e025\") " pod="openshift-must-gather-knpgw/must-gather-jg8kf" Jan 26 00:29:24 crc kubenswrapper[5124]: I0126 00:29:24.220728 5124 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cf0bc329-d2c3-484c-8a9a-0c5a38c0e025-must-gather-output\") pod \"must-gather-jg8kf\" (UID: \"cf0bc329-d2c3-484c-8a9a-0c5a38c0e025\") " pod="openshift-must-gather-knpgw/must-gather-jg8kf" Jan 26 00:29:24 crc kubenswrapper[5124]: I0126 00:29:24.243189 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn5qc\" (UniqueName: \"kubernetes.io/projected/cf0bc329-d2c3-484c-8a9a-0c5a38c0e025-kube-api-access-vn5qc\") pod \"must-gather-jg8kf\" (UID: \"cf0bc329-d2c3-484c-8a9a-0c5a38c0e025\") " pod="openshift-must-gather-knpgw/must-gather-jg8kf" Jan 26 00:29:24 crc kubenswrapper[5124]: I0126 00:29:24.313656 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-knpgw/must-gather-jg8kf" Jan 26 00:29:24 crc kubenswrapper[5124]: I0126 00:29:24.724229 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-knpgw/must-gather-jg8kf"] Jan 26 00:29:25 crc kubenswrapper[5124]: I0126 00:29:25.044925 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-knpgw/must-gather-jg8kf" event={"ID":"cf0bc329-d2c3-484c-8a9a-0c5a38c0e025","Type":"ContainerStarted","Data":"4b43bf36a4b5d6938976f974f4ad9ad62508d57e47dc4f477797d4ba562da33c"} Jan 26 00:29:30 crc kubenswrapper[5124]: I0126 00:29:30.079743 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-knpgw/must-gather-jg8kf" event={"ID":"cf0bc329-d2c3-484c-8a9a-0c5a38c0e025","Type":"ContainerStarted","Data":"505f63766366b0bf51e594f487df0da83a6bc356f9bc29d4138ae7f6c85fd7e7"} Jan 26 00:29:30 crc kubenswrapper[5124]: I0126 00:29:30.080296 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-knpgw/must-gather-jg8kf" event={"ID":"cf0bc329-d2c3-484c-8a9a-0c5a38c0e025","Type":"ContainerStarted","Data":"00d71d5e0bd49fd66009defdb30bf13828f3607065ed8c8b27032bcee1b11d56"} Jan 26 00:29:30 crc kubenswrapper[5124]: I0126 00:29:30.094124 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-knpgw/must-gather-jg8kf" podStartSLOduration=2.389050869 podStartE2EDuration="7.094105709s" podCreationTimestamp="2026-01-26 00:29:23 +0000 UTC" firstStartedPulling="2026-01-26 00:29:24.73144262 +0000 UTC m=+1242.640361969" lastFinishedPulling="2026-01-26 00:29:29.43649746 +0000 UTC m=+1247.345416809" observedRunningTime="2026-01-26 00:29:30.093904924 +0000 UTC m=+1248.002824273" watchObservedRunningTime="2026-01-26 00:29:30.094105709 +0000 UTC m=+1248.003025058" Jan 26 00:30:00 crc kubenswrapper[5124]: I0126 00:30:00.135544 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29489790-88whg"] Jan 26 00:30:00 crc kubenswrapper[5124]: I0126 00:30:00.150175 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29489790-mrcqp"] Jan 26 00:30:00 crc kubenswrapper[5124]: I0126 00:30:00.150651 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-88whg" Jan 26 00:30:00 crc kubenswrapper[5124]: I0126 00:30:00.154436 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 26 00:30:00 crc kubenswrapper[5124]: I0126 00:30:00.154473 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 26 00:30:00 crc kubenswrapper[5124]: I0126 00:30:00.154479 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489790-mrcqp" Jan 26 00:30:00 crc kubenswrapper[5124]: I0126 00:30:00.158874 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 26 00:30:00 crc kubenswrapper[5124]: I0126 00:30:00.159112 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 26 00:30:00 crc kubenswrapper[5124]: I0126 00:30:00.159133 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-26tfw\"" Jan 26 00:30:00 crc kubenswrapper[5124]: I0126 00:30:00.164624 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29489790-88whg"] Jan 26 00:30:00 crc kubenswrapper[5124]: I0126 00:30:00.182120 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489790-mrcqp"] Jan 26 00:30:00 crc kubenswrapper[5124]: I0126 00:30:00.261696 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2648a63a-ebfb-4071-9f6b-580c03a90285-secret-volume\") pod \"collect-profiles-29489790-88whg\" (UID: \"2648a63a-ebfb-4071-9f6b-580c03a90285\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-88whg" Jan 26 00:30:00 crc kubenswrapper[5124]: I0126 00:30:00.262259 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6l5m\" (UniqueName: \"kubernetes.io/projected/c475a9c3-f1b0-4f46-b392-4bf86411642c-kube-api-access-f6l5m\") pod \"auto-csr-approver-29489790-mrcqp\" (UID: \"c475a9c3-f1b0-4f46-b392-4bf86411642c\") " pod="openshift-infra/auto-csr-approver-29489790-mrcqp" Jan 26 00:30:00 crc kubenswrapper[5124]: I0126 00:30:00.262417 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6trjw\" (UniqueName: \"kubernetes.io/projected/2648a63a-ebfb-4071-9f6b-580c03a90285-kube-api-access-6trjw\") pod \"collect-profiles-29489790-88whg\" (UID: \"2648a63a-ebfb-4071-9f6b-580c03a90285\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-88whg" Jan 26 00:30:00 crc kubenswrapper[5124]: I0126 00:30:00.262576 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2648a63a-ebfb-4071-9f6b-580c03a90285-config-volume\") pod \"collect-profiles-29489790-88whg\" (UID: \"2648a63a-ebfb-4071-9f6b-580c03a90285\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-88whg" Jan 26 00:30:00 crc kubenswrapper[5124]: I0126 00:30:00.364004 5124 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2648a63a-ebfb-4071-9f6b-580c03a90285-secret-volume\") pod \"collect-profiles-29489790-88whg\" (UID: \"2648a63a-ebfb-4071-9f6b-580c03a90285\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-88whg" Jan 26 00:30:00 crc kubenswrapper[5124]: I0126 00:30:00.364377 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f6l5m\" (UniqueName: \"kubernetes.io/projected/c475a9c3-f1b0-4f46-b392-4bf86411642c-kube-api-access-f6l5m\") pod \"auto-csr-approver-29489790-mrcqp\" (UID: \"c475a9c3-f1b0-4f46-b392-4bf86411642c\") " pod="openshift-infra/auto-csr-approver-29489790-mrcqp" Jan 26 00:30:00 crc kubenswrapper[5124]: I0126 00:30:00.364565 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6trjw\" (UniqueName: \"kubernetes.io/projected/2648a63a-ebfb-4071-9f6b-580c03a90285-kube-api-access-6trjw\") pod \"collect-profiles-29489790-88whg\" (UID: \"2648a63a-ebfb-4071-9f6b-580c03a90285\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-88whg" Jan 26 00:30:00 crc kubenswrapper[5124]: I0126 00:30:00.365173 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2648a63a-ebfb-4071-9f6b-580c03a90285-config-volume\") pod \"collect-profiles-29489790-88whg\" (UID: \"2648a63a-ebfb-4071-9f6b-580c03a90285\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-88whg" Jan 26 00:30:00 crc kubenswrapper[5124]: I0126 00:30:00.366323 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2648a63a-ebfb-4071-9f6b-580c03a90285-config-volume\") pod \"collect-profiles-29489790-88whg\" (UID: \"2648a63a-ebfb-4071-9f6b-580c03a90285\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-88whg" Jan 26 00:30:00 crc kubenswrapper[5124]: I0126 00:30:00.381790 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2648a63a-ebfb-4071-9f6b-580c03a90285-secret-volume\") pod \"collect-profiles-29489790-88whg\" (UID: \"2648a63a-ebfb-4071-9f6b-580c03a90285\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-88whg" Jan 26 00:30:00 crc kubenswrapper[5124]: I0126 00:30:00.381865 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6trjw\" (UniqueName: \"kubernetes.io/projected/2648a63a-ebfb-4071-9f6b-580c03a90285-kube-api-access-6trjw\") pod \"collect-profiles-29489790-88whg\" (UID: \"2648a63a-ebfb-4071-9f6b-580c03a90285\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-88whg" Jan 26 00:30:00 crc kubenswrapper[5124]: I0126 00:30:00.384042 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6l5m\" (UniqueName: \"kubernetes.io/projected/c475a9c3-f1b0-4f46-b392-4bf86411642c-kube-api-access-f6l5m\") pod \"auto-csr-approver-29489790-mrcqp\" (UID: \"c475a9c3-f1b0-4f46-b392-4bf86411642c\") " pod="openshift-infra/auto-csr-approver-29489790-mrcqp" Jan 26 00:30:00 crc kubenswrapper[5124]: I0126 00:30:00.471886 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-88whg" Jan 26 00:30:00 crc kubenswrapper[5124]: I0126 00:30:00.479691 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489790-mrcqp" Jan 26 00:30:00 crc kubenswrapper[5124]: I0126 00:30:00.706392 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489790-mrcqp"] Jan 26 00:30:00 crc kubenswrapper[5124]: I0126 00:30:00.738840 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29489790-88whg"] Jan 26 00:30:01 crc kubenswrapper[5124]: I0126 00:30:01.318609 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489790-mrcqp" event={"ID":"c475a9c3-f1b0-4f46-b392-4bf86411642c","Type":"ContainerStarted","Data":"5d945c75888915afce7e0679fd78ed82f24415300d52ed93486e16b9c1f61923"} Jan 26 00:30:01 crc kubenswrapper[5124]: I0126 00:30:01.320262 5124 generic.go:358] "Generic (PLEG): container finished" podID="2648a63a-ebfb-4071-9f6b-580c03a90285" containerID="d7f954c3e4b0522f35c3e7327d1bca93fb089e91b4eeb2f7c7da0278cc682652" exitCode=0 Jan 26 00:30:01 crc kubenswrapper[5124]: I0126 00:30:01.320308 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-88whg" event={"ID":"2648a63a-ebfb-4071-9f6b-580c03a90285","Type":"ContainerDied","Data":"d7f954c3e4b0522f35c3e7327d1bca93fb089e91b4eeb2f7c7da0278cc682652"} Jan 26 00:30:01 crc kubenswrapper[5124]: I0126 00:30:01.320338 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-88whg" event={"ID":"2648a63a-ebfb-4071-9f6b-580c03a90285","Type":"ContainerStarted","Data":"89918eb281a4bf6eb8f2ae531517f3f76117307629db9905752022f3686080e9"} Jan 26 00:30:02 crc kubenswrapper[5124]: I0126 00:30:02.328076 5124 generic.go:358] "Generic (PLEG): container finished" podID="c475a9c3-f1b0-4f46-b392-4bf86411642c" containerID="dfc0077814741c04b7c9f3892a0b4f84f4b70a9bba9af13e11382be4ff37644d" exitCode=0 Jan 26 00:30:02 crc kubenswrapper[5124]: I0126 00:30:02.328164 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489790-mrcqp" event={"ID":"c475a9c3-f1b0-4f46-b392-4bf86411642c","Type":"ContainerDied","Data":"dfc0077814741c04b7c9f3892a0b4f84f4b70a9bba9af13e11382be4ff37644d"} Jan 26 00:30:02 crc kubenswrapper[5124]: I0126 00:30:02.550960 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-88whg" Jan 26 00:30:02 crc kubenswrapper[5124]: I0126 00:30:02.600620 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2648a63a-ebfb-4071-9f6b-580c03a90285-config-volume\") pod \"2648a63a-ebfb-4071-9f6b-580c03a90285\" (UID: \"2648a63a-ebfb-4071-9f6b-580c03a90285\") " Jan 26 00:30:02 crc kubenswrapper[5124]: I0126 00:30:02.600911 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6trjw\" (UniqueName: \"kubernetes.io/projected/2648a63a-ebfb-4071-9f6b-580c03a90285-kube-api-access-6trjw\") pod \"2648a63a-ebfb-4071-9f6b-580c03a90285\" (UID: \"2648a63a-ebfb-4071-9f6b-580c03a90285\") " Jan 26 00:30:02 crc kubenswrapper[5124]: I0126 00:30:02.600949 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2648a63a-ebfb-4071-9f6b-580c03a90285-secret-volume\") pod \"2648a63a-ebfb-4071-9f6b-580c03a90285\" (UID: \"2648a63a-ebfb-4071-9f6b-580c03a90285\") " Jan 26 00:30:02 crc kubenswrapper[5124]: I0126 00:30:02.602644 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2648a63a-ebfb-4071-9f6b-580c03a90285-config-volume" (OuterVolumeSpecName: "config-volume") pod "2648a63a-ebfb-4071-9f6b-580c03a90285" (UID: "2648a63a-ebfb-4071-9f6b-580c03a90285"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:30:02 crc kubenswrapper[5124]: I0126 00:30:02.613356 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2648a63a-ebfb-4071-9f6b-580c03a90285-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2648a63a-ebfb-4071-9f6b-580c03a90285" (UID: "2648a63a-ebfb-4071-9f6b-580c03a90285"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:30:02 crc kubenswrapper[5124]: I0126 00:30:02.618687 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2648a63a-ebfb-4071-9f6b-580c03a90285-kube-api-access-6trjw" (OuterVolumeSpecName: "kube-api-access-6trjw") pod "2648a63a-ebfb-4071-9f6b-580c03a90285" (UID: "2648a63a-ebfb-4071-9f6b-580c03a90285"). InnerVolumeSpecName "kube-api-access-6trjw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:30:02 crc kubenswrapper[5124]: I0126 00:30:02.702650 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6trjw\" (UniqueName: \"kubernetes.io/projected/2648a63a-ebfb-4071-9f6b-580c03a90285-kube-api-access-6trjw\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:02 crc kubenswrapper[5124]: I0126 00:30:02.702706 5124 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2648a63a-ebfb-4071-9f6b-580c03a90285-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:02 crc kubenswrapper[5124]: I0126 00:30:02.702719 5124 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2648a63a-ebfb-4071-9f6b-580c03a90285-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:03 crc kubenswrapper[5124]: I0126 00:30:03.337383 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-88whg" event={"ID":"2648a63a-ebfb-4071-9f6b-580c03a90285","Type":"ContainerDied","Data":"89918eb281a4bf6eb8f2ae531517f3f76117307629db9905752022f3686080e9"} Jan 26 00:30:03 crc kubenswrapper[5124]: I0126 00:30:03.337767 5124 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89918eb281a4bf6eb8f2ae531517f3f76117307629db9905752022f3686080e9" Jan 26 00:30:03 crc kubenswrapper[5124]: I0126 00:30:03.337388 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-88whg" Jan 26 00:30:03 crc kubenswrapper[5124]: I0126 00:30:03.607994 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489790-mrcqp" Jan 26 00:30:03 crc kubenswrapper[5124]: I0126 00:30:03.716927 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6l5m\" (UniqueName: \"kubernetes.io/projected/c475a9c3-f1b0-4f46-b392-4bf86411642c-kube-api-access-f6l5m\") pod \"c475a9c3-f1b0-4f46-b392-4bf86411642c\" (UID: \"c475a9c3-f1b0-4f46-b392-4bf86411642c\") " Jan 26 00:30:03 crc kubenswrapper[5124]: I0126 00:30:03.721366 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c475a9c3-f1b0-4f46-b392-4bf86411642c-kube-api-access-f6l5m" (OuterVolumeSpecName: "kube-api-access-f6l5m") pod "c475a9c3-f1b0-4f46-b392-4bf86411642c" (UID: "c475a9c3-f1b0-4f46-b392-4bf86411642c"). InnerVolumeSpecName "kube-api-access-f6l5m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:30:03 crc kubenswrapper[5124]: I0126 00:30:03.818048 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f6l5m\" (UniqueName: \"kubernetes.io/projected/c475a9c3-f1b0-4f46-b392-4bf86411642c-kube-api-access-f6l5m\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:04 crc kubenswrapper[5124]: I0126 00:30:04.351459 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489790-mrcqp" Jan 26 00:30:04 crc kubenswrapper[5124]: I0126 00:30:04.351449 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489790-mrcqp" event={"ID":"c475a9c3-f1b0-4f46-b392-4bf86411642c","Type":"ContainerDied","Data":"5d945c75888915afce7e0679fd78ed82f24415300d52ed93486e16b9c1f61923"} Jan 26 00:30:04 crc kubenswrapper[5124]: I0126 00:30:04.351655 5124 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d945c75888915afce7e0679fd78ed82f24415300d52ed93486e16b9c1f61923" Jan 26 00:30:04 crc kubenswrapper[5124]: I0126 00:30:04.687717 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29489784-2zkk4"] Jan 26 00:30:04 crc kubenswrapper[5124]: I0126 00:30:04.696515 5124 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29489784-2zkk4"] Jan 26 00:30:06 crc kubenswrapper[5124]: I0126 00:30:06.375281 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1b9c748-aa0b-49ff-8f11-47a7a1ca7512" path="/var/lib/kubelet/pods/e1b9c748-aa0b-49ff-8f11-47a7a1ca7512/volumes" Jan 26 00:30:09 crc kubenswrapper[5124]: I0126 00:30:09.582265 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-75ffdb6fcd-2xm5v_b3a1a33e-2dab-43f6-8c34-6ac84e05eb03/control-plane-machine-set-operator/0.log" Jan 26 00:30:09 crc kubenswrapper[5124]: I0126 00:30:09.736838 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-6629f_b9496837-38dd-4e08-bf40-9a191112e42a/machine-api-operator/0.log" Jan 26 00:30:09 crc kubenswrapper[5124]: I0126 00:30:09.745659 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-6629f_b9496837-38dd-4e08-bf40-9a191112e42a/kube-rbac-proxy/0.log" Jan 26 00:30:10 crc kubenswrapper[5124]: I0126 00:30:10.830180 5124 patch_prober.go:28] interesting pod/machine-config-daemon-kmxcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:30:10 crc kubenswrapper[5124]: I0126 00:30:10.830498 5124 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" podUID="95fa0656-150a-4d93-a324-77a1306d91f7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:30:21 crc kubenswrapper[5124]: I0126 00:30:21.626101 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858d87f86b-cbk4b_e2e31e19-e327-45be-a96e-c0164687516e/cert-manager-controller/0.log" Jan 26 00:30:21 crc kubenswrapper[5124]: I0126 00:30:21.754253 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7dbf76d5c8-dth5b_3a891fe1-31ca-4a63-bdba-3c5a8857eec1/cert-manager-cainjector/0.log" Jan 26 00:30:21 crc kubenswrapper[5124]: I0126 00:30:21.839043 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-7894b5b9b4-9s5jb_4d221106-7c92-4968-8bb3-20be6806e046/cert-manager-webhook/0.log" Jan 26 00:30:36 crc kubenswrapper[5124]: I0126 00:30:36.263846 5124 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-rdc79_55489b76-1256-4d20-b6ab-800ea25b615a/prometheus-operator/0.log" Jan 26 00:30:36 crc kubenswrapper[5124]: I0126 00:30:36.355931 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-66c4ff6d7c-sdqdp_0eb54603-766c-4938-8f12-fcd1c1673213/prometheus-operator-admission-webhook/0.log" Jan 26 00:30:36 crc kubenswrapper[5124]: I0126 00:30:36.447705 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-66c4ff6d7c-wdcnm_370cc157-a069-4b36-aee7-98b2607e01c3/prometheus-operator-admission-webhook/0.log" Jan 26 00:30:36 crc kubenswrapper[5124]: I0126 00:30:36.557661 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-dxwvg_54f9d0ba-a6be-4a87-a44f-80b2bc6c0879/operator/0.log" Jan 26 00:30:36 crc kubenswrapper[5124]: I0126 00:30:36.662740 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-xbrsv_f1927088-b361-4e51-ace6-c6029dd3239c/perses-operator/0.log" Jan 26 00:30:40 crc kubenswrapper[5124]: I0126 00:30:40.831032 5124 patch_prober.go:28] interesting pod/machine-config-daemon-kmxcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:30:40 crc kubenswrapper[5124]: I0126 00:30:40.831697 5124 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" podUID="95fa0656-150a-4d93-a324-77a1306d91f7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:30:50 crc kubenswrapper[5124]: I0126 00:30:50.422887 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arw569_b87ef7de-04b2-4f6e-a380-8f3fc72b51d4/util/0.log" Jan 26 00:30:50 crc kubenswrapper[5124]: I0126 00:30:50.586939 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arw569_b87ef7de-04b2-4f6e-a380-8f3fc72b51d4/util/0.log" Jan 26 00:30:50 crc kubenswrapper[5124]: I0126 00:30:50.619347 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arw569_b87ef7de-04b2-4f6e-a380-8f3fc72b51d4/pull/0.log" Jan 26 00:30:50 crc kubenswrapper[5124]: I0126 00:30:50.668008 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arw569_b87ef7de-04b2-4f6e-a380-8f3fc72b51d4/pull/0.log" Jan 26 00:30:50 crc kubenswrapper[5124]: I0126 00:30:50.816374 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arw569_b87ef7de-04b2-4f6e-a380-8f3fc72b51d4/util/0.log" Jan 26 00:30:50 crc kubenswrapper[5124]: I0126 00:30:50.829721 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arw569_b87ef7de-04b2-4f6e-a380-8f3fc72b51d4/extract/0.log" Jan 
26 00:30:50 crc kubenswrapper[5124]: I0126 00:30:50.877378 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arw569_b87ef7de-04b2-4f6e-a380-8f3fc72b51d4/pull/0.log" Jan 26 00:30:51 crc kubenswrapper[5124]: I0126 00:30:51.011007 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fghgw2_b03960d1-482f-4b9d-a654-3a8a185334e9/util/0.log" Jan 26 00:30:51 crc kubenswrapper[5124]: I0126 00:30:51.182462 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fghgw2_b03960d1-482f-4b9d-a654-3a8a185334e9/pull/0.log" Jan 26 00:30:51 crc kubenswrapper[5124]: I0126 00:30:51.190648 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fghgw2_b03960d1-482f-4b9d-a654-3a8a185334e9/util/0.log" Jan 26 00:30:51 crc kubenswrapper[5124]: I0126 00:30:51.208784 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fghgw2_b03960d1-482f-4b9d-a654-3a8a185334e9/pull/0.log" Jan 26 00:30:51 crc kubenswrapper[5124]: I0126 00:30:51.379422 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fghgw2_b03960d1-482f-4b9d-a654-3a8a185334e9/pull/0.log" Jan 26 00:30:51 crc kubenswrapper[5124]: I0126 00:30:51.396542 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fghgw2_b03960d1-482f-4b9d-a654-3a8a185334e9/extract/0.log" Jan 26 00:30:51 crc kubenswrapper[5124]: I0126 00:30:51.414655 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fghgw2_b03960d1-482f-4b9d-a654-3a8a185334e9/util/0.log" Jan 26 00:30:51 crc kubenswrapper[5124]: I0126 00:30:51.582031 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e9bb8f_3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c/util/0.log" Jan 26 00:30:51 crc kubenswrapper[5124]: I0126 00:30:51.724609 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e9bb8f_3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c/util/0.log" Jan 26 00:30:51 crc kubenswrapper[5124]: I0126 00:30:51.739324 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e9bb8f_3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c/pull/0.log" Jan 26 00:30:51 crc kubenswrapper[5124]: I0126 00:30:51.781215 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e9bb8f_3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c/pull/0.log" Jan 26 00:30:51 crc kubenswrapper[5124]: I0126 00:30:51.901168 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e9bb8f_3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c/pull/0.log" Jan 26 00:30:51 crc kubenswrapper[5124]: I0126 00:30:51.901675 5124 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e9bb8f_3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c/extract/0.log" Jan 26 00:30:51 crc kubenswrapper[5124]: I0126 00:30:51.947729 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e9bb8f_3d1d6fa1-6660-4ff0-8195-3fb90ec72e2c/util/0.log" Jan 26 00:30:52 crc kubenswrapper[5124]: I0126 00:30:52.090066 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08srb2q_a4aff954-1afc-4dd4-8935-fa0cc1cebec6/util/0.log" Jan 26 00:30:52 crc kubenswrapper[5124]: I0126 00:30:52.258321 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08srb2q_a4aff954-1afc-4dd4-8935-fa0cc1cebec6/util/0.log" Jan 26 00:30:52 crc kubenswrapper[5124]: I0126 00:30:52.271736 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08srb2q_a4aff954-1afc-4dd4-8935-fa0cc1cebec6/pull/0.log" Jan 26 00:30:52 crc kubenswrapper[5124]: I0126 00:30:52.299463 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08srb2q_a4aff954-1afc-4dd4-8935-fa0cc1cebec6/pull/0.log" Jan 26 00:30:52 crc kubenswrapper[5124]: I0126 00:30:52.425368 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08srb2q_a4aff954-1afc-4dd4-8935-fa0cc1cebec6/pull/0.log" Jan 26 00:30:52 crc kubenswrapper[5124]: I0126 00:30:52.433378 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08srb2q_a4aff954-1afc-4dd4-8935-fa0cc1cebec6/util/0.log" Jan 26 00:30:52 crc kubenswrapper[5124]: I0126 00:30:52.454713 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08srb2q_a4aff954-1afc-4dd4-8935-fa0cc1cebec6/extract/0.log" Jan 26 00:30:52 crc kubenswrapper[5124]: I0126 00:30:52.465949 5124 scope.go:117] "RemoveContainer" containerID="7ee5c262734c1d12b0e010537b9bdf00b0bed56891f103531a465a30793fce02" Jan 26 00:30:52 crc kubenswrapper[5124]: I0126 00:30:52.637561 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9clqw_48b3ebb2-7731-4d34-b50d-a4ded959d5d4/extract-utilities/0.log" Jan 26 00:30:52 crc kubenswrapper[5124]: I0126 00:30:52.748041 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9clqw_48b3ebb2-7731-4d34-b50d-a4ded959d5d4/extract-utilities/0.log" Jan 26 00:30:52 crc kubenswrapper[5124]: I0126 00:30:52.772518 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9clqw_48b3ebb2-7731-4d34-b50d-a4ded959d5d4/extract-content/0.log" Jan 26 00:30:52 crc kubenswrapper[5124]: I0126 00:30:52.808108 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9clqw_48b3ebb2-7731-4d34-b50d-a4ded959d5d4/extract-content/0.log" Jan 26 00:30:52 crc kubenswrapper[5124]: I0126 00:30:52.932312 5124 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-9clqw_48b3ebb2-7731-4d34-b50d-a4ded959d5d4/extract-utilities/0.log" Jan 26 00:30:52 crc kubenswrapper[5124]: I0126 00:30:52.959340 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9clqw_48b3ebb2-7731-4d34-b50d-a4ded959d5d4/extract-content/0.log" Jan 26 00:30:53 crc kubenswrapper[5124]: I0126 00:30:53.061429 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9clqw_48b3ebb2-7731-4d34-b50d-a4ded959d5d4/registry-server/0.log" Jan 26 00:30:53 crc kubenswrapper[5124]: I0126 00:30:53.101125 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqj2s_e4afc7c4-f4b6-43f0-895d-d8eea95e4e44/extract-utilities/0.log" Jan 26 00:30:53 crc kubenswrapper[5124]: I0126 00:30:53.279281 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqj2s_e4afc7c4-f4b6-43f0-895d-d8eea95e4e44/extract-utilities/0.log" Jan 26 00:30:53 crc kubenswrapper[5124]: I0126 00:30:53.322707 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqj2s_e4afc7c4-f4b6-43f0-895d-d8eea95e4e44/extract-content/0.log" Jan 26 00:30:53 crc kubenswrapper[5124]: I0126 00:30:53.325170 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqj2s_e4afc7c4-f4b6-43f0-895d-d8eea95e4e44/extract-content/0.log" Jan 26 00:30:53 crc kubenswrapper[5124]: I0126 00:30:53.453448 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqj2s_e4afc7c4-f4b6-43f0-895d-d8eea95e4e44/extract-content/0.log" Jan 26 00:30:53 crc kubenswrapper[5124]: I0126 00:30:53.456176 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqj2s_e4afc7c4-f4b6-43f0-895d-d8eea95e4e44/extract-utilities/0.log" Jan 26 00:30:53 crc kubenswrapper[5124]: I0126 00:30:53.547824 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-btzrz_5bd59477-0d46-4047-a6b5-094ec66407f4/marketplace-operator/0.log" Jan 26 00:30:53 crc kubenswrapper[5124]: I0126 00:30:53.734093 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2lxw7_b3c103ac-5665-4af2-894d-ae43b0926b3f/extract-utilities/0.log" Jan 26 00:30:53 crc kubenswrapper[5124]: I0126 00:30:53.744852 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqj2s_e4afc7c4-f4b6-43f0-895d-d8eea95e4e44/registry-server/0.log" Jan 26 00:30:53 crc kubenswrapper[5124]: I0126 00:30:53.908041 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2lxw7_b3c103ac-5665-4af2-894d-ae43b0926b3f/extract-utilities/0.log" Jan 26 00:30:53 crc kubenswrapper[5124]: I0126 00:30:53.918368 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2lxw7_b3c103ac-5665-4af2-894d-ae43b0926b3f/extract-content/0.log" Jan 26 00:30:53 crc kubenswrapper[5124]: I0126 00:30:53.922345 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2lxw7_b3c103ac-5665-4af2-894d-ae43b0926b3f/extract-content/0.log" Jan 26 00:30:54 crc kubenswrapper[5124]: I0126 00:30:54.060673 5124 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-2lxw7_b3c103ac-5665-4af2-894d-ae43b0926b3f/extract-content/0.log" Jan 26 00:30:54 crc kubenswrapper[5124]: I0126 00:30:54.073676 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2lxw7_b3c103ac-5665-4af2-894d-ae43b0926b3f/extract-utilities/0.log" Jan 26 00:30:54 crc kubenswrapper[5124]: I0126 00:30:54.205150 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2lxw7_b3c103ac-5665-4af2-894d-ae43b0926b3f/registry-server/0.log" Jan 26 00:31:05 crc kubenswrapper[5124]: I0126 00:31:05.934163 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-rdc79_55489b76-1256-4d20-b6ab-800ea25b615a/prometheus-operator/0.log" Jan 26 00:31:05 crc kubenswrapper[5124]: I0126 00:31:05.936343 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-66c4ff6d7c-sdqdp_0eb54603-766c-4938-8f12-fcd1c1673213/prometheus-operator-admission-webhook/0.log" Jan 26 00:31:05 crc kubenswrapper[5124]: I0126 00:31:05.953555 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-66c4ff6d7c-wdcnm_370cc157-a069-4b36-aee7-98b2607e01c3/prometheus-operator-admission-webhook/0.log" Jan 26 00:31:06 crc kubenswrapper[5124]: I0126 00:31:06.078963 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-dxwvg_54f9d0ba-a6be-4a87-a44f-80b2bc6c0879/operator/0.log" Jan 26 00:31:06 crc kubenswrapper[5124]: I0126 00:31:06.128757 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-xbrsv_f1927088-b361-4e51-ace6-c6029dd3239c/perses-operator/0.log" Jan 26 00:31:10 crc kubenswrapper[5124]: I0126 00:31:10.830204 5124 patch_prober.go:28] interesting pod/machine-config-daemon-kmxcn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:31:10 crc kubenswrapper[5124]: I0126 00:31:10.830882 5124 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" podUID="95fa0656-150a-4d93-a324-77a1306d91f7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:31:10 crc kubenswrapper[5124]: I0126 00:31:10.830958 5124 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" Jan 26 00:31:10 crc kubenswrapper[5124]: I0126 00:31:10.832272 5124 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"82c9cf1ec6062ea01a6d002676f82275bc429fe3760dae651fc24fe679ab62b5"} pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 00:31:10 crc kubenswrapper[5124]: I0126 00:31:10.832421 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" podUID="95fa0656-150a-4d93-a324-77a1306d91f7" 
containerName="machine-config-daemon" containerID="cri-o://82c9cf1ec6062ea01a6d002676f82275bc429fe3760dae651fc24fe679ab62b5" gracePeriod=600 Jan 26 00:31:10 crc kubenswrapper[5124]: I0126 00:31:10.965621 5124 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 00:31:11 crc kubenswrapper[5124]: I0126 00:31:11.831178 5124 generic.go:358] "Generic (PLEG): container finished" podID="95fa0656-150a-4d93-a324-77a1306d91f7" containerID="82c9cf1ec6062ea01a6d002676f82275bc429fe3760dae651fc24fe679ab62b5" exitCode=0 Jan 26 00:31:11 crc kubenswrapper[5124]: I0126 00:31:11.831282 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" event={"ID":"95fa0656-150a-4d93-a324-77a1306d91f7","Type":"ContainerDied","Data":"82c9cf1ec6062ea01a6d002676f82275bc429fe3760dae651fc24fe679ab62b5"} Jan 26 00:31:11 crc kubenswrapper[5124]: I0126 00:31:11.832052 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kmxcn" event={"ID":"95fa0656-150a-4d93-a324-77a1306d91f7","Type":"ContainerStarted","Data":"310350cb44e39ad98f3eb8f2f489a0c8ac8591c4060639faae591e2e695466de"} Jan 26 00:31:11 crc kubenswrapper[5124]: I0126 00:31:11.832081 5124 scope.go:117] "RemoveContainer" containerID="e4364654e7244afc307256d5ab68b10d1fea1b2d37b15d2d92ab4bb0d2fa9068" Jan 26 00:31:13 crc kubenswrapper[5124]: I0126 00:31:13.190338 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d2lfb"] Jan 26 00:31:13 crc kubenswrapper[5124]: I0126 00:31:13.192461 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c475a9c3-f1b0-4f46-b392-4bf86411642c" containerName="oc" Jan 26 00:31:13 crc kubenswrapper[5124]: I0126 00:31:13.192494 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="c475a9c3-f1b0-4f46-b392-4bf86411642c" containerName="oc" Jan 26 00:31:13 crc kubenswrapper[5124]: I0126 00:31:13.192627 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2648a63a-ebfb-4071-9f6b-580c03a90285" containerName="collect-profiles" Jan 26 00:31:13 crc kubenswrapper[5124]: I0126 00:31:13.192647 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="2648a63a-ebfb-4071-9f6b-580c03a90285" containerName="collect-profiles" Jan 26 00:31:13 crc kubenswrapper[5124]: I0126 00:31:13.192863 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="2648a63a-ebfb-4071-9f6b-580c03a90285" containerName="collect-profiles" Jan 26 00:31:13 crc kubenswrapper[5124]: I0126 00:31:13.192888 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="c475a9c3-f1b0-4f46-b392-4bf86411642c" containerName="oc" Jan 26 00:31:13 crc kubenswrapper[5124]: I0126 00:31:13.200328 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d2lfb" Jan 26 00:31:13 crc kubenswrapper[5124]: I0126 00:31:13.205512 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d2lfb"] Jan 26 00:31:13 crc kubenswrapper[5124]: I0126 00:31:13.286297 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c98f43ae-b12c-489d-a17f-f9993b5f32ed-catalog-content\") pod \"redhat-operators-d2lfb\" (UID: \"c98f43ae-b12c-489d-a17f-f9993b5f32ed\") " pod="openshift-marketplace/redhat-operators-d2lfb" Jan 26 00:31:13 crc kubenswrapper[5124]: I0126 00:31:13.286460 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c98f43ae-b12c-489d-a17f-f9993b5f32ed-utilities\") pod \"redhat-operators-d2lfb\" (UID: \"c98f43ae-b12c-489d-a17f-f9993b5f32ed\") " pod="openshift-marketplace/redhat-operators-d2lfb" Jan 26 00:31:13 crc kubenswrapper[5124]: I0126 00:31:13.286633 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4pds\" (UniqueName: \"kubernetes.io/projected/c98f43ae-b12c-489d-a17f-f9993b5f32ed-kube-api-access-l4pds\") pod \"redhat-operators-d2lfb\" (UID: \"c98f43ae-b12c-489d-a17f-f9993b5f32ed\") " pod="openshift-marketplace/redhat-operators-d2lfb" Jan 26 00:31:13 crc kubenswrapper[5124]: I0126 00:31:13.388806 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l4pds\" (UniqueName: \"kubernetes.io/projected/c98f43ae-b12c-489d-a17f-f9993b5f32ed-kube-api-access-l4pds\") pod \"redhat-operators-d2lfb\" (UID: \"c98f43ae-b12c-489d-a17f-f9993b5f32ed\") " pod="openshift-marketplace/redhat-operators-d2lfb" Jan 26 00:31:13 crc kubenswrapper[5124]: I0126 00:31:13.388976 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c98f43ae-b12c-489d-a17f-f9993b5f32ed-catalog-content\") pod \"redhat-operators-d2lfb\" (UID: \"c98f43ae-b12c-489d-a17f-f9993b5f32ed\") " pod="openshift-marketplace/redhat-operators-d2lfb" Jan 26 00:31:13 crc kubenswrapper[5124]: I0126 00:31:13.389057 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c98f43ae-b12c-489d-a17f-f9993b5f32ed-utilities\") pod \"redhat-operators-d2lfb\" (UID: \"c98f43ae-b12c-489d-a17f-f9993b5f32ed\") " pod="openshift-marketplace/redhat-operators-d2lfb" Jan 26 00:31:13 crc kubenswrapper[5124]: I0126 00:31:13.389575 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c98f43ae-b12c-489d-a17f-f9993b5f32ed-catalog-content\") pod \"redhat-operators-d2lfb\" (UID: \"c98f43ae-b12c-489d-a17f-f9993b5f32ed\") " pod="openshift-marketplace/redhat-operators-d2lfb" Jan 26 00:31:13 crc kubenswrapper[5124]: I0126 00:31:13.389873 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c98f43ae-b12c-489d-a17f-f9993b5f32ed-utilities\") pod \"redhat-operators-d2lfb\" (UID: \"c98f43ae-b12c-489d-a17f-f9993b5f32ed\") " pod="openshift-marketplace/redhat-operators-d2lfb" Jan 26 00:31:13 crc kubenswrapper[5124]: I0126 00:31:13.420255 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-l4pds\" (UniqueName: \"kubernetes.io/projected/c98f43ae-b12c-489d-a17f-f9993b5f32ed-kube-api-access-l4pds\") pod \"redhat-operators-d2lfb\" (UID: \"c98f43ae-b12c-489d-a17f-f9993b5f32ed\") " pod="openshift-marketplace/redhat-operators-d2lfb" Jan 26 00:31:13 crc kubenswrapper[5124]: I0126 00:31:13.524599 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d2lfb" Jan 26 00:31:14 crc kubenswrapper[5124]: I0126 00:31:14.016334 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d2lfb"] Jan 26 00:31:14 crc kubenswrapper[5124]: I0126 00:31:14.867710 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2lfb" event={"ID":"c98f43ae-b12c-489d-a17f-f9993b5f32ed","Type":"ContainerDied","Data":"e71a0299a0cf30b8dc21465bf109f0fef453a4d8a6fbe16526ecff287f8ecfc5"} Jan 26 00:31:14 crc kubenswrapper[5124]: I0126 00:31:14.867744 5124 generic.go:358] "Generic (PLEG): container finished" podID="c98f43ae-b12c-489d-a17f-f9993b5f32ed" containerID="e71a0299a0cf30b8dc21465bf109f0fef453a4d8a6fbe16526ecff287f8ecfc5" exitCode=0 Jan 26 00:31:14 crc kubenswrapper[5124]: I0126 00:31:14.870009 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2lfb" event={"ID":"c98f43ae-b12c-489d-a17f-f9993b5f32ed","Type":"ContainerStarted","Data":"c9d394ab6afd3a1ec9df518eb27183884e42490282e604ed75ef5c7e2153c634"} Jan 26 00:31:15 crc kubenswrapper[5124]: I0126 00:31:15.881719 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2lfb" event={"ID":"c98f43ae-b12c-489d-a17f-f9993b5f32ed","Type":"ContainerStarted","Data":"bbeaf1356e8676dedb1343eb19c33aeeb3173d4c6cc4751c08459e1019a8ed5c"} Jan 26 00:31:16 crc kubenswrapper[5124]: I0126 00:31:16.892899 5124 generic.go:358] "Generic (PLEG): container finished" podID="c98f43ae-b12c-489d-a17f-f9993b5f32ed" containerID="bbeaf1356e8676dedb1343eb19c33aeeb3173d4c6cc4751c08459e1019a8ed5c" exitCode=0 Jan 26 00:31:16 crc kubenswrapper[5124]: I0126 00:31:16.892958 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2lfb" event={"ID":"c98f43ae-b12c-489d-a17f-f9993b5f32ed","Type":"ContainerDied","Data":"bbeaf1356e8676dedb1343eb19c33aeeb3173d4c6cc4751c08459e1019a8ed5c"} Jan 26 00:31:17 crc kubenswrapper[5124]: I0126 00:31:17.907646 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2lfb" event={"ID":"c98f43ae-b12c-489d-a17f-f9993b5f32ed","Type":"ContainerStarted","Data":"01749d3c6284f703448becb30c48894a726dae173f480d2cf82cf3f6d58d25d6"} Jan 26 00:31:17 crc kubenswrapper[5124]: I0126 00:31:17.938493 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d2lfb" podStartSLOduration=4.341430861 podStartE2EDuration="4.938420734s" podCreationTimestamp="2026-01-26 00:31:13 +0000 UTC" firstStartedPulling="2026-01-26 00:31:14.871287103 +0000 UTC m=+1352.780206452" lastFinishedPulling="2026-01-26 00:31:15.468276946 +0000 UTC m=+1353.377196325" observedRunningTime="2026-01-26 00:31:17.936442142 +0000 UTC m=+1355.845361531" watchObservedRunningTime="2026-01-26 00:31:17.938420734 +0000 UTC m=+1355.847340113" Jan 26 00:31:22 crc kubenswrapper[5124]: I0126 00:31:22.931427 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7lpbz"] Jan 26 00:31:23 crc 
kubenswrapper[5124]: I0126 00:31:23.370067 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7lpbz"] Jan 26 00:31:23 crc kubenswrapper[5124]: I0126 00:31:23.370236 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7lpbz" Jan 26 00:31:23 crc kubenswrapper[5124]: I0126 00:31:23.435678 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edf3da10-279e-445d-8222-13acd2e9d515-utilities\") pod \"certified-operators-7lpbz\" (UID: \"edf3da10-279e-445d-8222-13acd2e9d515\") " pod="openshift-marketplace/certified-operators-7lpbz" Jan 26 00:31:23 crc kubenswrapper[5124]: I0126 00:31:23.435763 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edf3da10-279e-445d-8222-13acd2e9d515-catalog-content\") pod \"certified-operators-7lpbz\" (UID: \"edf3da10-279e-445d-8222-13acd2e9d515\") " pod="openshift-marketplace/certified-operators-7lpbz" Jan 26 00:31:23 crc kubenswrapper[5124]: I0126 00:31:23.435914 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdh6l\" (UniqueName: \"kubernetes.io/projected/edf3da10-279e-445d-8222-13acd2e9d515-kube-api-access-pdh6l\") pod \"certified-operators-7lpbz\" (UID: \"edf3da10-279e-445d-8222-13acd2e9d515\") " pod="openshift-marketplace/certified-operators-7lpbz" Jan 26 00:31:23 crc kubenswrapper[5124]: I0126 00:31:23.524796 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d2lfb" Jan 26 00:31:23 crc kubenswrapper[5124]: I0126 00:31:23.524838 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-d2lfb" Jan 26 00:31:23 crc kubenswrapper[5124]: I0126 00:31:23.537826 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edf3da10-279e-445d-8222-13acd2e9d515-catalog-content\") pod \"certified-operators-7lpbz\" (UID: \"edf3da10-279e-445d-8222-13acd2e9d515\") " pod="openshift-marketplace/certified-operators-7lpbz" Jan 26 00:31:23 crc kubenswrapper[5124]: I0126 00:31:23.537898 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pdh6l\" (UniqueName: \"kubernetes.io/projected/edf3da10-279e-445d-8222-13acd2e9d515-kube-api-access-pdh6l\") pod \"certified-operators-7lpbz\" (UID: \"edf3da10-279e-445d-8222-13acd2e9d515\") " pod="openshift-marketplace/certified-operators-7lpbz" Jan 26 00:31:23 crc kubenswrapper[5124]: I0126 00:31:23.537960 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edf3da10-279e-445d-8222-13acd2e9d515-utilities\") pod \"certified-operators-7lpbz\" (UID: \"edf3da10-279e-445d-8222-13acd2e9d515\") " pod="openshift-marketplace/certified-operators-7lpbz" Jan 26 00:31:23 crc kubenswrapper[5124]: I0126 00:31:23.538353 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edf3da10-279e-445d-8222-13acd2e9d515-catalog-content\") pod \"certified-operators-7lpbz\" (UID: \"edf3da10-279e-445d-8222-13acd2e9d515\") " 
pod="openshift-marketplace/certified-operators-7lpbz" Jan 26 00:31:23 crc kubenswrapper[5124]: I0126 00:31:23.538395 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edf3da10-279e-445d-8222-13acd2e9d515-utilities\") pod \"certified-operators-7lpbz\" (UID: \"edf3da10-279e-445d-8222-13acd2e9d515\") " pod="openshift-marketplace/certified-operators-7lpbz" Jan 26 00:31:23 crc kubenswrapper[5124]: I0126 00:31:23.563870 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdh6l\" (UniqueName: \"kubernetes.io/projected/edf3da10-279e-445d-8222-13acd2e9d515-kube-api-access-pdh6l\") pod \"certified-operators-7lpbz\" (UID: \"edf3da10-279e-445d-8222-13acd2e9d515\") " pod="openshift-marketplace/certified-operators-7lpbz" Jan 26 00:31:23 crc kubenswrapper[5124]: I0126 00:31:23.689531 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7lpbz" Jan 26 00:31:23 crc kubenswrapper[5124]: I0126 00:31:23.916689 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7lpbz"] Jan 26 00:31:23 crc kubenswrapper[5124]: I0126 00:31:23.966908 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lpbz" event={"ID":"edf3da10-279e-445d-8222-13acd2e9d515","Type":"ContainerStarted","Data":"44459be2cdc07a03f9ccc18364b2c813599f1428d7048cf54d7aa3e7bd0209b4"} Jan 26 00:31:24 crc kubenswrapper[5124]: I0126 00:31:24.567213 5124 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d2lfb" podUID="c98f43ae-b12c-489d-a17f-f9993b5f32ed" containerName="registry-server" probeResult="failure" output=< Jan 26 00:31:24 crc kubenswrapper[5124]: timeout: failed to connect service ":50051" within 1s Jan 26 00:31:24 crc kubenswrapper[5124]: > Jan 26 00:31:24 crc kubenswrapper[5124]: I0126 00:31:24.974157 5124 generic.go:358] "Generic (PLEG): container finished" podID="edf3da10-279e-445d-8222-13acd2e9d515" containerID="abb2dd9a921ea0956cd8eeca92eea41dc708f7dbe868f5b7fcca9b38d7014e20" exitCode=0 Jan 26 00:31:24 crc kubenswrapper[5124]: I0126 00:31:24.976995 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lpbz" event={"ID":"edf3da10-279e-445d-8222-13acd2e9d515","Type":"ContainerDied","Data":"abb2dd9a921ea0956cd8eeca92eea41dc708f7dbe868f5b7fcca9b38d7014e20"} Jan 26 00:31:25 crc kubenswrapper[5124]: I0126 00:31:25.988182 5124 generic.go:358] "Generic (PLEG): container finished" podID="edf3da10-279e-445d-8222-13acd2e9d515" containerID="3173aa242185a25ebf0e6366ca49a1f52368058f614cef7550c57f0161439e9d" exitCode=0 Jan 26 00:31:25 crc kubenswrapper[5124]: I0126 00:31:25.988263 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lpbz" event={"ID":"edf3da10-279e-445d-8222-13acd2e9d515","Type":"ContainerDied","Data":"3173aa242185a25ebf0e6366ca49a1f52368058f614cef7550c57f0161439e9d"} Jan 26 00:31:27 crc kubenswrapper[5124]: I0126 00:31:27.011082 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lpbz" event={"ID":"edf3da10-279e-445d-8222-13acd2e9d515","Type":"ContainerStarted","Data":"584dba5c21016f370b476e93cfefb0f86738a1593fd4ba422eeebd6099b4b16b"} Jan 26 00:31:27 crc kubenswrapper[5124]: I0126 00:31:27.035825 5124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-7lpbz" podStartSLOduration=4.367025971 podStartE2EDuration="5.035800762s" podCreationTimestamp="2026-01-26 00:31:22 +0000 UTC" firstStartedPulling="2026-01-26 00:31:24.975252168 +0000 UTC m=+1362.884171517" lastFinishedPulling="2026-01-26 00:31:25.644026959 +0000 UTC m=+1363.552946308" observedRunningTime="2026-01-26 00:31:27.029949464 +0000 UTC m=+1364.938868833" watchObservedRunningTime="2026-01-26 00:31:27.035800762 +0000 UTC m=+1364.944720121" Jan 26 00:31:33 crc kubenswrapper[5124]: I0126 00:31:33.584978 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d2lfb" Jan 26 00:31:33 crc kubenswrapper[5124]: I0126 00:31:33.667215 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d2lfb" Jan 26 00:31:33 crc kubenswrapper[5124]: I0126 00:31:33.692605 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7lpbz" Jan 26 00:31:33 crc kubenswrapper[5124]: I0126 00:31:33.692665 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-7lpbz" Jan 26 00:31:33 crc kubenswrapper[5124]: I0126 00:31:33.755406 5124 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7lpbz" Jan 26 00:31:34 crc kubenswrapper[5124]: I0126 00:31:34.111067 5124 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7lpbz" Jan 26 00:31:35 crc kubenswrapper[5124]: I0126 00:31:35.304444 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7lpbz"] Jan 26 00:31:36 crc kubenswrapper[5124]: I0126 00:31:36.086982 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7lpbz" podUID="edf3da10-279e-445d-8222-13acd2e9d515" containerName="registry-server" containerID="cri-o://584dba5c21016f370b476e93cfefb0f86738a1593fd4ba422eeebd6099b4b16b" gracePeriod=2 Jan 26 00:31:37 crc kubenswrapper[5124]: I0126 00:31:37.038930 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7lpbz" Jan 26 00:31:37 crc kubenswrapper[5124]: I0126 00:31:37.098846 5124 generic.go:358] "Generic (PLEG): container finished" podID="edf3da10-279e-445d-8222-13acd2e9d515" containerID="584dba5c21016f370b476e93cfefb0f86738a1593fd4ba422eeebd6099b4b16b" exitCode=0 Jan 26 00:31:37 crc kubenswrapper[5124]: I0126 00:31:37.098960 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lpbz" event={"ID":"edf3da10-279e-445d-8222-13acd2e9d515","Type":"ContainerDied","Data":"584dba5c21016f370b476e93cfefb0f86738a1593fd4ba422eeebd6099b4b16b"} Jan 26 00:31:37 crc kubenswrapper[5124]: I0126 00:31:37.098987 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lpbz" event={"ID":"edf3da10-279e-445d-8222-13acd2e9d515","Type":"ContainerDied","Data":"44459be2cdc07a03f9ccc18364b2c813599f1428d7048cf54d7aa3e7bd0209b4"} Jan 26 00:31:37 crc kubenswrapper[5124]: I0126 00:31:37.099005 5124 scope.go:117] "RemoveContainer" containerID="584dba5c21016f370b476e93cfefb0f86738a1593fd4ba422eeebd6099b4b16b" Jan 26 00:31:37 crc kubenswrapper[5124]: I0126 00:31:37.099163 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7lpbz" Jan 26 00:31:37 crc kubenswrapper[5124]: I0126 00:31:37.104293 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d2lfb"] Jan 26 00:31:37 crc kubenswrapper[5124]: I0126 00:31:37.104566 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d2lfb" podUID="c98f43ae-b12c-489d-a17f-f9993b5f32ed" containerName="registry-server" containerID="cri-o://01749d3c6284f703448becb30c48894a726dae173f480d2cf82cf3f6d58d25d6" gracePeriod=2 Jan 26 00:31:37 crc kubenswrapper[5124]: I0126 00:31:37.123339 5124 scope.go:117] "RemoveContainer" containerID="3173aa242185a25ebf0e6366ca49a1f52368058f614cef7550c57f0161439e9d" Jan 26 00:31:37 crc kubenswrapper[5124]: I0126 00:31:37.148196 5124 scope.go:117] "RemoveContainer" containerID="abb2dd9a921ea0956cd8eeca92eea41dc708f7dbe868f5b7fcca9b38d7014e20" Jan 26 00:31:37 crc kubenswrapper[5124]: I0126 00:31:37.165169 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edf3da10-279e-445d-8222-13acd2e9d515-utilities\") pod \"edf3da10-279e-445d-8222-13acd2e9d515\" (UID: \"edf3da10-279e-445d-8222-13acd2e9d515\") " Jan 26 00:31:37 crc kubenswrapper[5124]: I0126 00:31:37.165254 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdh6l\" (UniqueName: \"kubernetes.io/projected/edf3da10-279e-445d-8222-13acd2e9d515-kube-api-access-pdh6l\") pod \"edf3da10-279e-445d-8222-13acd2e9d515\" (UID: \"edf3da10-279e-445d-8222-13acd2e9d515\") " Jan 26 00:31:37 crc kubenswrapper[5124]: I0126 00:31:37.165320 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edf3da10-279e-445d-8222-13acd2e9d515-catalog-content\") pod \"edf3da10-279e-445d-8222-13acd2e9d515\" (UID: \"edf3da10-279e-445d-8222-13acd2e9d515\") " Jan 26 00:31:37 crc kubenswrapper[5124]: I0126 00:31:37.166849 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edf3da10-279e-445d-8222-13acd2e9d515-utilities" (OuterVolumeSpecName: 
"utilities") pod "edf3da10-279e-445d-8222-13acd2e9d515" (UID: "edf3da10-279e-445d-8222-13acd2e9d515"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:31:37 crc kubenswrapper[5124]: I0126 00:31:37.175034 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edf3da10-279e-445d-8222-13acd2e9d515-kube-api-access-pdh6l" (OuterVolumeSpecName: "kube-api-access-pdh6l") pod "edf3da10-279e-445d-8222-13acd2e9d515" (UID: "edf3da10-279e-445d-8222-13acd2e9d515"). InnerVolumeSpecName "kube-api-access-pdh6l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:31:37 crc kubenswrapper[5124]: I0126 00:31:37.213455 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edf3da10-279e-445d-8222-13acd2e9d515-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "edf3da10-279e-445d-8222-13acd2e9d515" (UID: "edf3da10-279e-445d-8222-13acd2e9d515"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:31:37 crc kubenswrapper[5124]: I0126 00:31:37.267155 5124 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edf3da10-279e-445d-8222-13acd2e9d515-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:31:37 crc kubenswrapper[5124]: I0126 00:31:37.267225 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pdh6l\" (UniqueName: \"kubernetes.io/projected/edf3da10-279e-445d-8222-13acd2e9d515-kube-api-access-pdh6l\") on node \"crc\" DevicePath \"\"" Jan 26 00:31:37 crc kubenswrapper[5124]: I0126 00:31:37.267235 5124 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edf3da10-279e-445d-8222-13acd2e9d515-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:31:37 crc kubenswrapper[5124]: I0126 00:31:37.281740 5124 scope.go:117] "RemoveContainer" containerID="584dba5c21016f370b476e93cfefb0f86738a1593fd4ba422eeebd6099b4b16b" Jan 26 00:31:37 crc kubenswrapper[5124]: E0126 00:31:37.282733 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"584dba5c21016f370b476e93cfefb0f86738a1593fd4ba422eeebd6099b4b16b\": container with ID starting with 584dba5c21016f370b476e93cfefb0f86738a1593fd4ba422eeebd6099b4b16b not found: ID does not exist" containerID="584dba5c21016f370b476e93cfefb0f86738a1593fd4ba422eeebd6099b4b16b" Jan 26 00:31:37 crc kubenswrapper[5124]: I0126 00:31:37.282810 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"584dba5c21016f370b476e93cfefb0f86738a1593fd4ba422eeebd6099b4b16b"} err="failed to get container status \"584dba5c21016f370b476e93cfefb0f86738a1593fd4ba422eeebd6099b4b16b\": rpc error: code = NotFound desc = could not find container \"584dba5c21016f370b476e93cfefb0f86738a1593fd4ba422eeebd6099b4b16b\": container with ID starting with 584dba5c21016f370b476e93cfefb0f86738a1593fd4ba422eeebd6099b4b16b not found: ID does not exist" Jan 26 00:31:37 crc kubenswrapper[5124]: I0126 00:31:37.282851 5124 scope.go:117] "RemoveContainer" containerID="3173aa242185a25ebf0e6366ca49a1f52368058f614cef7550c57f0161439e9d" Jan 26 00:31:37 crc kubenswrapper[5124]: E0126 00:31:37.283690 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3173aa242185a25ebf0e6366ca49a1f52368058f614cef7550c57f0161439e9d\": container with ID starting with 3173aa242185a25ebf0e6366ca49a1f52368058f614cef7550c57f0161439e9d not found: ID does not exist" containerID="3173aa242185a25ebf0e6366ca49a1f52368058f614cef7550c57f0161439e9d" Jan 26 00:31:37 crc kubenswrapper[5124]: I0126 00:31:37.283740 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3173aa242185a25ebf0e6366ca49a1f52368058f614cef7550c57f0161439e9d"} err="failed to get container status \"3173aa242185a25ebf0e6366ca49a1f52368058f614cef7550c57f0161439e9d\": rpc error: code = NotFound desc = could not find container \"3173aa242185a25ebf0e6366ca49a1f52368058f614cef7550c57f0161439e9d\": container with ID starting with 3173aa242185a25ebf0e6366ca49a1f52368058f614cef7550c57f0161439e9d not found: ID does not exist" Jan 26 00:31:37 crc kubenswrapper[5124]: I0126 00:31:37.283769 5124 scope.go:117] "RemoveContainer" containerID="abb2dd9a921ea0956cd8eeca92eea41dc708f7dbe868f5b7fcca9b38d7014e20" Jan 26 00:31:37 crc kubenswrapper[5124]: E0126 00:31:37.284410 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abb2dd9a921ea0956cd8eeca92eea41dc708f7dbe868f5b7fcca9b38d7014e20\": container with ID starting with abb2dd9a921ea0956cd8eeca92eea41dc708f7dbe868f5b7fcca9b38d7014e20 not found: ID does not exist" containerID="abb2dd9a921ea0956cd8eeca92eea41dc708f7dbe868f5b7fcca9b38d7014e20" Jan 26 00:31:37 crc kubenswrapper[5124]: I0126 00:31:37.284452 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abb2dd9a921ea0956cd8eeca92eea41dc708f7dbe868f5b7fcca9b38d7014e20"} err="failed to get container status \"abb2dd9a921ea0956cd8eeca92eea41dc708f7dbe868f5b7fcca9b38d7014e20\": rpc error: code = NotFound desc = could not find container \"abb2dd9a921ea0956cd8eeca92eea41dc708f7dbe868f5b7fcca9b38d7014e20\": container with ID starting with abb2dd9a921ea0956cd8eeca92eea41dc708f7dbe868f5b7fcca9b38d7014e20 not found: ID does not exist" Jan 26 00:31:37 crc kubenswrapper[5124]: I0126 00:31:37.435642 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7lpbz"] Jan 26 00:31:37 crc kubenswrapper[5124]: I0126 00:31:37.440975 5124 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7lpbz"] Jan 26 00:31:37 crc kubenswrapper[5124]: I0126 00:31:37.487487 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d2lfb" Jan 26 00:31:37 crc kubenswrapper[5124]: I0126 00:31:37.571282 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c98f43ae-b12c-489d-a17f-f9993b5f32ed-utilities\") pod \"c98f43ae-b12c-489d-a17f-f9993b5f32ed\" (UID: \"c98f43ae-b12c-489d-a17f-f9993b5f32ed\") " Jan 26 00:31:37 crc kubenswrapper[5124]: I0126 00:31:37.571515 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4pds\" (UniqueName: \"kubernetes.io/projected/c98f43ae-b12c-489d-a17f-f9993b5f32ed-kube-api-access-l4pds\") pod \"c98f43ae-b12c-489d-a17f-f9993b5f32ed\" (UID: \"c98f43ae-b12c-489d-a17f-f9993b5f32ed\") " Jan 26 00:31:37 crc kubenswrapper[5124]: I0126 00:31:37.571629 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c98f43ae-b12c-489d-a17f-f9993b5f32ed-catalog-content\") pod \"c98f43ae-b12c-489d-a17f-f9993b5f32ed\" (UID: \"c98f43ae-b12c-489d-a17f-f9993b5f32ed\") " Jan 26 00:31:37 crc kubenswrapper[5124]: I0126 00:31:37.573193 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c98f43ae-b12c-489d-a17f-f9993b5f32ed-utilities" (OuterVolumeSpecName: "utilities") pod "c98f43ae-b12c-489d-a17f-f9993b5f32ed" (UID: "c98f43ae-b12c-489d-a17f-f9993b5f32ed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:31:37 crc kubenswrapper[5124]: I0126 00:31:37.576427 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c98f43ae-b12c-489d-a17f-f9993b5f32ed-kube-api-access-l4pds" (OuterVolumeSpecName: "kube-api-access-l4pds") pod "c98f43ae-b12c-489d-a17f-f9993b5f32ed" (UID: "c98f43ae-b12c-489d-a17f-f9993b5f32ed"). InnerVolumeSpecName "kube-api-access-l4pds". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:31:37 crc kubenswrapper[5124]: I0126 00:31:37.663547 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c98f43ae-b12c-489d-a17f-f9993b5f32ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c98f43ae-b12c-489d-a17f-f9993b5f32ed" (UID: "c98f43ae-b12c-489d-a17f-f9993b5f32ed"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:31:37 crc kubenswrapper[5124]: I0126 00:31:37.675050 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l4pds\" (UniqueName: \"kubernetes.io/projected/c98f43ae-b12c-489d-a17f-f9993b5f32ed-kube-api-access-l4pds\") on node \"crc\" DevicePath \"\"" Jan 26 00:31:37 crc kubenswrapper[5124]: I0126 00:31:37.675123 5124 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c98f43ae-b12c-489d-a17f-f9993b5f32ed-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:31:37 crc kubenswrapper[5124]: I0126 00:31:37.675150 5124 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c98f43ae-b12c-489d-a17f-f9993b5f32ed-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:31:38 crc kubenswrapper[5124]: I0126 00:31:38.108480 5124 generic.go:358] "Generic (PLEG): container finished" podID="c98f43ae-b12c-489d-a17f-f9993b5f32ed" containerID="01749d3c6284f703448becb30c48894a726dae173f480d2cf82cf3f6d58d25d6" exitCode=0 Jan 26 00:31:38 crc kubenswrapper[5124]: I0126 00:31:38.108519 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2lfb" event={"ID":"c98f43ae-b12c-489d-a17f-f9993b5f32ed","Type":"ContainerDied","Data":"01749d3c6284f703448becb30c48894a726dae173f480d2cf82cf3f6d58d25d6"} Jan 26 00:31:38 crc kubenswrapper[5124]: I0126 00:31:38.108567 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2lfb" event={"ID":"c98f43ae-b12c-489d-a17f-f9993b5f32ed","Type":"ContainerDied","Data":"c9d394ab6afd3a1ec9df518eb27183884e42490282e604ed75ef5c7e2153c634"} Jan 26 00:31:38 crc kubenswrapper[5124]: I0126 00:31:38.108604 5124 scope.go:117] "RemoveContainer" containerID="01749d3c6284f703448becb30c48894a726dae173f480d2cf82cf3f6d58d25d6" Jan 26 00:31:38 crc kubenswrapper[5124]: I0126 00:31:38.108612 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d2lfb" Jan 26 00:31:38 crc kubenswrapper[5124]: I0126 00:31:38.142976 5124 scope.go:117] "RemoveContainer" containerID="bbeaf1356e8676dedb1343eb19c33aeeb3173d4c6cc4751c08459e1019a8ed5c" Jan 26 00:31:38 crc kubenswrapper[5124]: I0126 00:31:38.147779 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d2lfb"] Jan 26 00:31:38 crc kubenswrapper[5124]: I0126 00:31:38.153744 5124 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d2lfb"] Jan 26 00:31:38 crc kubenswrapper[5124]: I0126 00:31:38.177915 5124 scope.go:117] "RemoveContainer" containerID="e71a0299a0cf30b8dc21465bf109f0fef453a4d8a6fbe16526ecff287f8ecfc5" Jan 26 00:31:38 crc kubenswrapper[5124]: I0126 00:31:38.200421 5124 scope.go:117] "RemoveContainer" containerID="01749d3c6284f703448becb30c48894a726dae173f480d2cf82cf3f6d58d25d6" Jan 26 00:31:38 crc kubenswrapper[5124]: E0126 00:31:38.201004 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01749d3c6284f703448becb30c48894a726dae173f480d2cf82cf3f6d58d25d6\": container with ID starting with 01749d3c6284f703448becb30c48894a726dae173f480d2cf82cf3f6d58d25d6 not found: ID does not exist" containerID="01749d3c6284f703448becb30c48894a726dae173f480d2cf82cf3f6d58d25d6" Jan 26 00:31:38 crc kubenswrapper[5124]: I0126 00:31:38.201119 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01749d3c6284f703448becb30c48894a726dae173f480d2cf82cf3f6d58d25d6"} err="failed to get container status \"01749d3c6284f703448becb30c48894a726dae173f480d2cf82cf3f6d58d25d6\": rpc error: code = NotFound desc = could not find container \"01749d3c6284f703448becb30c48894a726dae173f480d2cf82cf3f6d58d25d6\": container with ID starting with 01749d3c6284f703448becb30c48894a726dae173f480d2cf82cf3f6d58d25d6 not found: ID does not exist" Jan 26 00:31:38 crc kubenswrapper[5124]: I0126 00:31:38.201198 5124 scope.go:117] "RemoveContainer" containerID="bbeaf1356e8676dedb1343eb19c33aeeb3173d4c6cc4751c08459e1019a8ed5c" Jan 26 00:31:38 crc kubenswrapper[5124]: E0126 00:31:38.201794 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbeaf1356e8676dedb1343eb19c33aeeb3173d4c6cc4751c08459e1019a8ed5c\": container with ID starting with bbeaf1356e8676dedb1343eb19c33aeeb3173d4c6cc4751c08459e1019a8ed5c not found: ID does not exist" containerID="bbeaf1356e8676dedb1343eb19c33aeeb3173d4c6cc4751c08459e1019a8ed5c" Jan 26 00:31:38 crc kubenswrapper[5124]: I0126 00:31:38.201854 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbeaf1356e8676dedb1343eb19c33aeeb3173d4c6cc4751c08459e1019a8ed5c"} err="failed to get container status \"bbeaf1356e8676dedb1343eb19c33aeeb3173d4c6cc4751c08459e1019a8ed5c\": rpc error: code = NotFound desc = could not find container \"bbeaf1356e8676dedb1343eb19c33aeeb3173d4c6cc4751c08459e1019a8ed5c\": container with ID starting with bbeaf1356e8676dedb1343eb19c33aeeb3173d4c6cc4751c08459e1019a8ed5c not found: ID does not exist" Jan 26 00:31:38 crc kubenswrapper[5124]: I0126 00:31:38.201886 5124 scope.go:117] "RemoveContainer" containerID="e71a0299a0cf30b8dc21465bf109f0fef453a4d8a6fbe16526ecff287f8ecfc5" Jan 26 00:31:38 crc kubenswrapper[5124]: E0126 00:31:38.202350 5124 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"e71a0299a0cf30b8dc21465bf109f0fef453a4d8a6fbe16526ecff287f8ecfc5\": container with ID starting with e71a0299a0cf30b8dc21465bf109f0fef453a4d8a6fbe16526ecff287f8ecfc5 not found: ID does not exist" containerID="e71a0299a0cf30b8dc21465bf109f0fef453a4d8a6fbe16526ecff287f8ecfc5" Jan 26 00:31:38 crc kubenswrapper[5124]: I0126 00:31:38.202404 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e71a0299a0cf30b8dc21465bf109f0fef453a4d8a6fbe16526ecff287f8ecfc5"} err="failed to get container status \"e71a0299a0cf30b8dc21465bf109f0fef453a4d8a6fbe16526ecff287f8ecfc5\": rpc error: code = NotFound desc = could not find container \"e71a0299a0cf30b8dc21465bf109f0fef453a4d8a6fbe16526ecff287f8ecfc5\": container with ID starting with e71a0299a0cf30b8dc21465bf109f0fef453a4d8a6fbe16526ecff287f8ecfc5 not found: ID does not exist" Jan 26 00:31:38 crc kubenswrapper[5124]: I0126 00:31:38.379265 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c98f43ae-b12c-489d-a17f-f9993b5f32ed" path="/var/lib/kubelet/pods/c98f43ae-b12c-489d-a17f-f9993b5f32ed/volumes" Jan 26 00:31:38 crc kubenswrapper[5124]: I0126 00:31:38.380570 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edf3da10-279e-445d-8222-13acd2e9d515" path="/var/lib/kubelet/pods/edf3da10-279e-445d-8222-13acd2e9d515/volumes" Jan 26 00:31:49 crc kubenswrapper[5124]: I0126 00:31:49.201895 5124 generic.go:358] "Generic (PLEG): container finished" podID="cf0bc329-d2c3-484c-8a9a-0c5a38c0e025" containerID="00d71d5e0bd49fd66009defdb30bf13828f3607065ed8c8b27032bcee1b11d56" exitCode=0 Jan 26 00:31:49 crc kubenswrapper[5124]: I0126 00:31:49.202116 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-knpgw/must-gather-jg8kf" event={"ID":"cf0bc329-d2c3-484c-8a9a-0c5a38c0e025","Type":"ContainerDied","Data":"00d71d5e0bd49fd66009defdb30bf13828f3607065ed8c8b27032bcee1b11d56"} Jan 26 00:31:49 crc kubenswrapper[5124]: I0126 00:31:49.203129 5124 scope.go:117] "RemoveContainer" containerID="00d71d5e0bd49fd66009defdb30bf13828f3607065ed8c8b27032bcee1b11d56" Jan 26 00:31:50 crc kubenswrapper[5124]: I0126 00:31:50.102184 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-knpgw_must-gather-jg8kf_cf0bc329-d2c3-484c-8a9a-0c5a38c0e025/gather/0.log" Jan 26 00:31:56 crc kubenswrapper[5124]: I0126 00:31:56.317192 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-knpgw/must-gather-jg8kf"] Jan 26 00:31:56 crc kubenswrapper[5124]: I0126 00:31:56.317958 5124 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-knpgw/must-gather-jg8kf" podUID="cf0bc329-d2c3-484c-8a9a-0c5a38c0e025" containerName="copy" containerID="cri-o://505f63766366b0bf51e594f487df0da83a6bc356f9bc29d4138ae7f6c85fd7e7" gracePeriod=2 Jan 26 00:31:56 crc kubenswrapper[5124]: I0126 00:31:56.321839 5124 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-knpgw/must-gather-jg8kf"] Jan 26 00:31:56 crc kubenswrapper[5124]: I0126 00:31:56.746063 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-knpgw_must-gather-jg8kf_cf0bc329-d2c3-484c-8a9a-0c5a38c0e025/copy/0.log" Jan 26 00:31:56 crc kubenswrapper[5124]: I0126 00:31:56.746854 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-knpgw/must-gather-jg8kf" Jan 26 00:31:56 crc kubenswrapper[5124]: I0126 00:31:56.813155 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vn5qc\" (UniqueName: \"kubernetes.io/projected/cf0bc329-d2c3-484c-8a9a-0c5a38c0e025-kube-api-access-vn5qc\") pod \"cf0bc329-d2c3-484c-8a9a-0c5a38c0e025\" (UID: \"cf0bc329-d2c3-484c-8a9a-0c5a38c0e025\") " Jan 26 00:31:56 crc kubenswrapper[5124]: I0126 00:31:56.813579 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cf0bc329-d2c3-484c-8a9a-0c5a38c0e025-must-gather-output\") pod \"cf0bc329-d2c3-484c-8a9a-0c5a38c0e025\" (UID: \"cf0bc329-d2c3-484c-8a9a-0c5a38c0e025\") " Jan 26 00:31:56 crc kubenswrapper[5124]: I0126 00:31:56.827205 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf0bc329-d2c3-484c-8a9a-0c5a38c0e025-kube-api-access-vn5qc" (OuterVolumeSpecName: "kube-api-access-vn5qc") pod "cf0bc329-d2c3-484c-8a9a-0c5a38c0e025" (UID: "cf0bc329-d2c3-484c-8a9a-0c5a38c0e025"). InnerVolumeSpecName "kube-api-access-vn5qc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:31:56 crc kubenswrapper[5124]: I0126 00:31:56.866222 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf0bc329-d2c3-484c-8a9a-0c5a38c0e025-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "cf0bc329-d2c3-484c-8a9a-0c5a38c0e025" (UID: "cf0bc329-d2c3-484c-8a9a-0c5a38c0e025"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:31:56 crc kubenswrapper[5124]: I0126 00:31:56.915425 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vn5qc\" (UniqueName: \"kubernetes.io/projected/cf0bc329-d2c3-484c-8a9a-0c5a38c0e025-kube-api-access-vn5qc\") on node \"crc\" DevicePath \"\"" Jan 26 00:31:56 crc kubenswrapper[5124]: I0126 00:31:56.915475 5124 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cf0bc329-d2c3-484c-8a9a-0c5a38c0e025-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 26 00:31:57 crc kubenswrapper[5124]: I0126 00:31:57.268978 5124 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-knpgw_must-gather-jg8kf_cf0bc329-d2c3-484c-8a9a-0c5a38c0e025/copy/0.log" Jan 26 00:31:57 crc kubenswrapper[5124]: I0126 00:31:57.269451 5124 generic.go:358] "Generic (PLEG): container finished" podID="cf0bc329-d2c3-484c-8a9a-0c5a38c0e025" containerID="505f63766366b0bf51e594f487df0da83a6bc356f9bc29d4138ae7f6c85fd7e7" exitCode=143 Jan 26 00:31:57 crc kubenswrapper[5124]: I0126 00:31:57.269515 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-knpgw/must-gather-jg8kf" Jan 26 00:31:57 crc kubenswrapper[5124]: I0126 00:31:57.269612 5124 scope.go:117] "RemoveContainer" containerID="505f63766366b0bf51e594f487df0da83a6bc356f9bc29d4138ae7f6c85fd7e7" Jan 26 00:31:57 crc kubenswrapper[5124]: I0126 00:31:57.290872 5124 scope.go:117] "RemoveContainer" containerID="00d71d5e0bd49fd66009defdb30bf13828f3607065ed8c8b27032bcee1b11d56" Jan 26 00:31:57 crc kubenswrapper[5124]: I0126 00:31:57.362402 5124 scope.go:117] "RemoveContainer" containerID="505f63766366b0bf51e594f487df0da83a6bc356f9bc29d4138ae7f6c85fd7e7" Jan 26 00:31:57 crc kubenswrapper[5124]: E0126 00:31:57.363672 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"505f63766366b0bf51e594f487df0da83a6bc356f9bc29d4138ae7f6c85fd7e7\": container with ID starting with 505f63766366b0bf51e594f487df0da83a6bc356f9bc29d4138ae7f6c85fd7e7 not found: ID does not exist" containerID="505f63766366b0bf51e594f487df0da83a6bc356f9bc29d4138ae7f6c85fd7e7" Jan 26 00:31:57 crc kubenswrapper[5124]: I0126 00:31:57.364051 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"505f63766366b0bf51e594f487df0da83a6bc356f9bc29d4138ae7f6c85fd7e7"} err="failed to get container status \"505f63766366b0bf51e594f487df0da83a6bc356f9bc29d4138ae7f6c85fd7e7\": rpc error: code = NotFound desc = could not find container \"505f63766366b0bf51e594f487df0da83a6bc356f9bc29d4138ae7f6c85fd7e7\": container with ID starting with 505f63766366b0bf51e594f487df0da83a6bc356f9bc29d4138ae7f6c85fd7e7 not found: ID does not exist" Jan 26 00:31:57 crc kubenswrapper[5124]: I0126 00:31:57.364086 5124 scope.go:117] "RemoveContainer" containerID="00d71d5e0bd49fd66009defdb30bf13828f3607065ed8c8b27032bcee1b11d56" Jan 26 00:31:57 crc kubenswrapper[5124]: E0126 00:31:57.365250 5124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00d71d5e0bd49fd66009defdb30bf13828f3607065ed8c8b27032bcee1b11d56\": container with ID starting with 00d71d5e0bd49fd66009defdb30bf13828f3607065ed8c8b27032bcee1b11d56 not found: ID does not exist" containerID="00d71d5e0bd49fd66009defdb30bf13828f3607065ed8c8b27032bcee1b11d56" Jan 26 00:31:57 crc kubenswrapper[5124]: I0126 00:31:57.365302 5124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00d71d5e0bd49fd66009defdb30bf13828f3607065ed8c8b27032bcee1b11d56"} err="failed to get container status \"00d71d5e0bd49fd66009defdb30bf13828f3607065ed8c8b27032bcee1b11d56\": rpc error: code = NotFound desc = could not find container \"00d71d5e0bd49fd66009defdb30bf13828f3607065ed8c8b27032bcee1b11d56\": container with ID starting with 00d71d5e0bd49fd66009defdb30bf13828f3607065ed8c8b27032bcee1b11d56 not found: ID does not exist" Jan 26 00:31:58 crc kubenswrapper[5124]: I0126 00:31:58.375394 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf0bc329-d2c3-484c-8a9a-0c5a38c0e025" path="/var/lib/kubelet/pods/cf0bc329-d2c3-484c-8a9a-0c5a38c0e025/volumes" Jan 26 00:32:00 crc kubenswrapper[5124]: I0126 00:32:00.153377 5124 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29489792-r8mn8"] Jan 26 00:32:00 crc kubenswrapper[5124]: I0126 00:32:00.155657 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="edf3da10-279e-445d-8222-13acd2e9d515" containerName="extract-content" Jan 
26 00:32:00 crc kubenswrapper[5124]: I0126 00:32:00.155705 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="edf3da10-279e-445d-8222-13acd2e9d515" containerName="extract-content" Jan 26 00:32:00 crc kubenswrapper[5124]: I0126 00:32:00.155749 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c98f43ae-b12c-489d-a17f-f9993b5f32ed" containerName="extract-utilities" Jan 26 00:32:00 crc kubenswrapper[5124]: I0126 00:32:00.155763 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="c98f43ae-b12c-489d-a17f-f9993b5f32ed" containerName="extract-utilities" Jan 26 00:32:00 crc kubenswrapper[5124]: I0126 00:32:00.155804 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cf0bc329-d2c3-484c-8a9a-0c5a38c0e025" containerName="gather" Jan 26 00:32:00 crc kubenswrapper[5124]: I0126 00:32:00.155818 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf0bc329-d2c3-484c-8a9a-0c5a38c0e025" containerName="gather" Jan 26 00:32:00 crc kubenswrapper[5124]: I0126 00:32:00.155835 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="edf3da10-279e-445d-8222-13acd2e9d515" containerName="extract-utilities" Jan 26 00:32:00 crc kubenswrapper[5124]: I0126 00:32:00.155848 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="edf3da10-279e-445d-8222-13acd2e9d515" containerName="extract-utilities" Jan 26 00:32:00 crc kubenswrapper[5124]: I0126 00:32:00.155865 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c98f43ae-b12c-489d-a17f-f9993b5f32ed" containerName="extract-content" Jan 26 00:32:00 crc kubenswrapper[5124]: I0126 00:32:00.155878 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="c98f43ae-b12c-489d-a17f-f9993b5f32ed" containerName="extract-content" Jan 26 00:32:00 crc kubenswrapper[5124]: I0126 00:32:00.155907 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c98f43ae-b12c-489d-a17f-f9993b5f32ed" containerName="registry-server" Jan 26 00:32:00 crc kubenswrapper[5124]: I0126 00:32:00.155919 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="c98f43ae-b12c-489d-a17f-f9993b5f32ed" containerName="registry-server" Jan 26 00:32:00 crc kubenswrapper[5124]: I0126 00:32:00.155934 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cf0bc329-d2c3-484c-8a9a-0c5a38c0e025" containerName="copy" Jan 26 00:32:00 crc kubenswrapper[5124]: I0126 00:32:00.155947 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf0bc329-d2c3-484c-8a9a-0c5a38c0e025" containerName="copy" Jan 26 00:32:00 crc kubenswrapper[5124]: I0126 00:32:00.155966 5124 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="edf3da10-279e-445d-8222-13acd2e9d515" containerName="registry-server" Jan 26 00:32:00 crc kubenswrapper[5124]: I0126 00:32:00.155978 5124 state_mem.go:107] "Deleted CPUSet assignment" podUID="edf3da10-279e-445d-8222-13acd2e9d515" containerName="registry-server" Jan 26 00:32:00 crc kubenswrapper[5124]: I0126 00:32:00.156173 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="cf0bc329-d2c3-484c-8a9a-0c5a38c0e025" containerName="copy" Jan 26 00:32:00 crc kubenswrapper[5124]: I0126 00:32:00.156218 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="cf0bc329-d2c3-484c-8a9a-0c5a38c0e025" containerName="gather" Jan 26 00:32:00 crc kubenswrapper[5124]: I0126 00:32:00.156238 5124 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="edf3da10-279e-445d-8222-13acd2e9d515" containerName="registry-server" Jan 26 00:32:00 crc kubenswrapper[5124]: I0126 00:32:00.156254 5124 memory_manager.go:356] "RemoveStaleState removing state" podUID="c98f43ae-b12c-489d-a17f-f9993b5f32ed" containerName="registry-server" Jan 26 00:32:00 crc kubenswrapper[5124]: I0126 00:32:00.176079 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489792-r8mn8"] Jan 26 00:32:00 crc kubenswrapper[5124]: I0126 00:32:00.176280 5124 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489792-r8mn8" Jan 26 00:32:00 crc kubenswrapper[5124]: I0126 00:32:00.179919 5124 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-26tfw\"" Jan 26 00:32:00 crc kubenswrapper[5124]: I0126 00:32:00.180035 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 26 00:32:00 crc kubenswrapper[5124]: I0126 00:32:00.180190 5124 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 26 00:32:00 crc kubenswrapper[5124]: I0126 00:32:00.260921 5124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jccvl\" (UniqueName: \"kubernetes.io/projected/1bc55feb-de3d-4e05-8694-1774f854711a-kube-api-access-jccvl\") pod \"auto-csr-approver-29489792-r8mn8\" (UID: \"1bc55feb-de3d-4e05-8694-1774f854711a\") " pod="openshift-infra/auto-csr-approver-29489792-r8mn8" Jan 26 00:32:00 crc kubenswrapper[5124]: I0126 00:32:00.363164 5124 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jccvl\" (UniqueName: \"kubernetes.io/projected/1bc55feb-de3d-4e05-8694-1774f854711a-kube-api-access-jccvl\") pod \"auto-csr-approver-29489792-r8mn8\" (UID: \"1bc55feb-de3d-4e05-8694-1774f854711a\") " pod="openshift-infra/auto-csr-approver-29489792-r8mn8" Jan 26 00:32:00 crc kubenswrapper[5124]: I0126 00:32:00.395125 5124 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jccvl\" (UniqueName: \"kubernetes.io/projected/1bc55feb-de3d-4e05-8694-1774f854711a-kube-api-access-jccvl\") pod \"auto-csr-approver-29489792-r8mn8\" (UID: \"1bc55feb-de3d-4e05-8694-1774f854711a\") " pod="openshift-infra/auto-csr-approver-29489792-r8mn8" Jan 26 00:32:00 crc kubenswrapper[5124]: I0126 00:32:00.504290 5124 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489792-r8mn8" Jan 26 00:32:00 crc kubenswrapper[5124]: I0126 00:32:00.772017 5124 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489792-r8mn8"] Jan 26 00:32:00 crc kubenswrapper[5124]: W0126 00:32:00.780844 5124 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1bc55feb_de3d_4e05_8694_1774f854711a.slice/crio-6332eb16a86471087e9423314227bcb6405fbb65e2fceeded7dbd11aa589b86b WatchSource:0}: Error finding container 6332eb16a86471087e9423314227bcb6405fbb65e2fceeded7dbd11aa589b86b: Status 404 returned error can't find the container with id 6332eb16a86471087e9423314227bcb6405fbb65e2fceeded7dbd11aa589b86b Jan 26 00:32:01 crc kubenswrapper[5124]: I0126 00:32:01.300985 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489792-r8mn8" event={"ID":"1bc55feb-de3d-4e05-8694-1774f854711a","Type":"ContainerStarted","Data":"6332eb16a86471087e9423314227bcb6405fbb65e2fceeded7dbd11aa589b86b"} Jan 26 00:32:03 crc kubenswrapper[5124]: I0126 00:32:03.324287 5124 generic.go:358] "Generic (PLEG): container finished" podID="1bc55feb-de3d-4e05-8694-1774f854711a" containerID="3020d52d8fe1f83fc2c1bd7971fec9903ac919ebfe6f6f9df16d7a011e232347" exitCode=0 Jan 26 00:32:03 crc kubenswrapper[5124]: I0126 00:32:03.324644 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489792-r8mn8" event={"ID":"1bc55feb-de3d-4e05-8694-1774f854711a","Type":"ContainerDied","Data":"3020d52d8fe1f83fc2c1bd7971fec9903ac919ebfe6f6f9df16d7a011e232347"} Jan 26 00:32:04 crc kubenswrapper[5124]: I0126 00:32:04.676032 5124 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489792-r8mn8" Jan 26 00:32:04 crc kubenswrapper[5124]: I0126 00:32:04.759702 5124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jccvl\" (UniqueName: \"kubernetes.io/projected/1bc55feb-de3d-4e05-8694-1774f854711a-kube-api-access-jccvl\") pod \"1bc55feb-de3d-4e05-8694-1774f854711a\" (UID: \"1bc55feb-de3d-4e05-8694-1774f854711a\") " Jan 26 00:32:04 crc kubenswrapper[5124]: I0126 00:32:04.767810 5124 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bc55feb-de3d-4e05-8694-1774f854711a-kube-api-access-jccvl" (OuterVolumeSpecName: "kube-api-access-jccvl") pod "1bc55feb-de3d-4e05-8694-1774f854711a" (UID: "1bc55feb-de3d-4e05-8694-1774f854711a"). InnerVolumeSpecName "kube-api-access-jccvl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:32:04 crc kubenswrapper[5124]: I0126 00:32:04.861761 5124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jccvl\" (UniqueName: \"kubernetes.io/projected/1bc55feb-de3d-4e05-8694-1774f854711a-kube-api-access-jccvl\") on node \"crc\" DevicePath \"\"" Jan 26 00:32:05 crc kubenswrapper[5124]: I0126 00:32:05.345674 5124 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489792-r8mn8" Jan 26 00:32:05 crc kubenswrapper[5124]: I0126 00:32:05.346060 5124 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489792-r8mn8" event={"ID":"1bc55feb-de3d-4e05-8694-1774f854711a","Type":"ContainerDied","Data":"6332eb16a86471087e9423314227bcb6405fbb65e2fceeded7dbd11aa589b86b"} Jan 26 00:32:05 crc kubenswrapper[5124]: I0126 00:32:05.346187 5124 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6332eb16a86471087e9423314227bcb6405fbb65e2fceeded7dbd11aa589b86b" Jan 26 00:32:05 crc kubenswrapper[5124]: I0126 00:32:05.728024 5124 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29489786-46cnn"] Jan 26 00:32:05 crc kubenswrapper[5124]: I0126 00:32:05.732172 5124 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29489786-46cnn"] Jan 26 00:32:06 crc kubenswrapper[5124]: I0126 00:32:06.379683 5124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb0161f4-0739-4dad-b0fb-cb065fec2d03" path="/var/lib/kubelet/pods/bb0161f4-0739-4dad-b0fb-cb065fec2d03/volumes" Jan 26 00:32:52 crc kubenswrapper[5124]: I0126 00:32:52.619442 5124 scope.go:117] "RemoveContainer" containerID="0df3319170851245e973cf4100474630f30023e31dfdd3766cd0e08dedb142e2"